
Viral Deception, Polarization and Networks Workshop

Friday, November 2 – Saturday, November 3, 2018

Glandt Forum, Singh Center for Nanotechnology

This workshop, organized by The Warren Center, will run over two days. On the first day, five keynote speakers will survey the “lay of the land” from different perspectives. For example, how does propaganda or deceptive news spread over social networks, and what can different institutions do about it? Does viral deception cause polarization, or is it merely a reflection of it? How have viral deception and polarization affected recent election outcomes?

On the second day, leading specialists in computer science, economics, and sociology will discuss different ways to think about the spread of deceptive news, its effects on important social outcomes such as misinformation and polarization, and what social media platforms might be able to do about it.

Organized by Eduard Talamàs, Michael Kearns, and Rakesh Vohra


Day 1: Keynote speakers

9:45 am – 10:00 am: Coffee and light snacks

10:00 am – 11:00 am: Alan Abramowitz, Department of Political Science, Emory University

“Polarization, Negative Partisanship and the State of American Politics”

I will discuss long-term demographic and cultural trends that have contributed to polarization and negative partisanship. The rise of negative partisanship can be seen as both a cause and a consequence of a media environment that facilitates the spread of misinformation and disinformation, especially on the right.


11:30 am – 12:30 pm: Levi Boxell, Department of Economics, Stanford University

“The Internet, Political Polarization, and the 2016 Election”

Popular narratives cast the internet and social media as important drivers of political polarization. More recently, these technologies have been blamed for influencing the 2016 US election outcome. This talk will review the evidence surrounding both of these claims.


12:30 pm – 1:30 pm: Lunch

1:30 pm – 2:30 pm: Samantha Bradshaw, Computational Propaganda Project, Oxford University

“Why Does Disinformation Spread So Quickly on Social Media?”

The manipulation of public opinion over social media platforms is a critical concern of the 21st century. Over the past few years, there have been several attempts by foreign operatives, political parties, and populist movements to manipulate the outcome of elections by spreading disinformation to voters at key moments during public life. By co-opting the advertising infrastructure, algorithms, and the user agreements that support social media platforms, disinformation has been leveraged to sow discord, dissent, and division among citizens in democracies around the world. This talk will examine the drivers of disinformation, focusing on particular examples from the United States between 2016 and 2018, and will discuss the consequences of this phenomenon for democracy and political participation.


3:00 pm – 4:00 pm: Eugene Kiely, Executive Director of

“Combating Viral Deceptions: Stories from the Front Lines”

 
“Fake News” is nothing new. has been combating viral deceptions since December 2007, when it introduced Ask FactCheck to swat down chain emails circulating among family and friends. Since then, viral deceptions have exploded with the rise of the internet and social media. works with Facebook to curb the spread of misinformation on its platform. We will hear about the various forms of viral deception circulating on the web,’s work with Facebook, and the extent to which fact-checking is or is not successful.


4:30 pm – 5:30 pm: Katerina Eva Matsa, Pew Research Center

“The News Media Landscape in a Digital, Polarized Age”

There are stark partisan divides in Americans’ attitudes about the performance of the national news media, as well as in the platforms and sources they turn to for news. Join Pew Research Center’s Associate Director of Journalism Research Katerina Eva Matsa as she discusses the news media landscape in an increasingly digital, polarized age.

SLIDES courtesy of Pew Research Center

6:00 pm: Dinner for all speakers


Day 2: Specialist talks

8:30 am – 9:00 am: Light breakfast

9:00 am – 9:45 am: Ron Berman, The Wharton School, University of Pennsylvania

“Curation Algorithms and Filter Bubbles in Social Networks”

Social platforms often use curation algorithms to match content to user tastes. Although designed to improve user experience, these algorithms have been blamed for increasing the polarization of consumed content. We analyze how curation algorithms impact the number of friends users follow and the quality of content generated on the network, taking into account horizontal and vertical differentiation. Although algorithms increase polarization for fixed networks, when they indirectly influence network connectivity and content quality, their impact on polarization and segregation is less clear.
We find that network connectivity and content quality are strategic complements, and that introducing curation makes consumers less selective and increases connectivity. In equilibrium, content creators receive lower payoffs because they enter into a contest leading to a prisoner’s dilemma.
A perfect filtering algorithm increases content polarization and creates a filter bubble when the marginal cost of quality is low, while an algorithm focused on vertical content quality increases connectivity, lowers polarization, and does not create a filter bubble. User-surplus analysis shows that platforms can increase user surplus by lowering the marginal cost of quality investment while introducing curation, but this may lead to the unintended consequence of a filter bubble.


10:00 am – 10:45 am: Ozan Candogan, Booth School of Business, University of Chicago

“Optimal Signaling of Content Accuracy: Engagement vs. Misinformation”

This paper studies information design in social networks. We consider a setting where agents’ actions exhibit positive local network externalities. There is uncertainty about the underlying state of the world, which impacts agents’ payoffs. The platform can choose a signaling mechanism that sends informative signals to agents upon realization of this uncertainty, thereby influencing their actions. We investigate how the platform should design its signaling mechanism to achieve a desired outcome. Although this abstract setting has many applications, we discuss our results in the context of a specific one: misinformation in social networks. Agents in a social network engage with content that is possibly erroneous. Their payoff is based on the direct satisfaction from engaging, the disutility from engaging with erroneous content, and the positive externality that they derive from engaging with the same content as their peers in the underlying social network. The platform can commit to a signaling mechanism that sends agents informative signals based on the realization of the error of the content, and thereby influence agents’ engagement decisions.
We show that the optimal (in terms of engagement/misinformation) signaling mechanisms have a simple threshold structure: the platform recommends that agents “Engage” with the content if its error is below a threshold and recommends “Do not engage” otherwise. For the mechanism that maximizes engagement, these thresholds depend on agents’ network positions, which we capture through a novel centrality measure that we introduce. Surprisingly, in the case where the platform seeks only to minimize misinformation (regardless of the induced engagement), public signal mechanisms with identical thresholds across agents are optimal. This is in contrast to the engagement maximization setting, where when agents are heterogeneous in terms of their network positions, public signal mechanisms induce substantially lower engagement than the optimal mechanisms. We also study the frontier of the engagement/misinformation levels that can be achieved via different mechanisms and characterize when public signal mechanisms can achieve optimal tradeoffs. Finally, we supplement our theoretical findings with numerical simulations on a Facebook subgraph.
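The threshold structure of the optimal mechanisms can be illustrated with a toy sketch. This is not code from the paper: the agent names, the specific threshold values, and the linear centrality-to-threshold rule are all illustrative assumptions; the paper's centrality measure is a novel construction not reproduced here.

```python
def signal(error, threshold):
    """Threshold signaling: recommend engagement iff content error is below the cutoff."""
    return "Engage" if error < threshold else "Do not engage"

# Engagement maximization: thresholds vary with agents' network positions
# (stand-in centrality scores; the paper's measure is more involved).
centrality = {"alice": 0.9, "bob": 0.4, "carol": 0.6}
thresholds = {agent: 0.2 + 0.3 * c for agent, c in centrality.items()}

# Misinformation minimization: a single public threshold shared by all agents.
public_threshold = 0.3

error = 0.35  # realized error of a piece of content
for agent in centrality:
    print(agent, signal(error, thresholds[agent]), signal(error, public_threshold))
```

With these toy numbers, the same content draws different private recommendations depending on each agent's position, while the public mechanism treats everyone identically.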


11:00 am – 11:45 am: Marcos Fernandes, Department of Economics, Stony Brook University

“Social Media Networks, Fake News, and Polarization”

We study how the structure of social media networks and the presence of fake news might affect the degree of misinformation and polarization in a society. To that end, we analyze a dynamic model of opinion exchange in which individuals have imperfect information about the true state of the world and are boundedly rational. Key to the analysis is the presence of internet bots: agents in the network that do not follow other agents and are seeded with a constant flow of biased information. We characterize how the flow of opinions evolves over time and evaluate the determinants of long-run disagreement among individuals in the network. We then create a large set of heterogeneous random graphs and simulate a long information exchange process to quantify how the bots’ ability to spread fake news, and the number and degree centrality of agents susceptible to them, affect misinformation and polarization in the long run.
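The dynamic just described can be sketched with a minimal simulation. This is a stand-in, not the paper's model: the graph construction, update rule (simple repeated averaging of followed accounts), and all parameter values are illustrative assumptions.

```python
import random

def simulate(n_agents=50, n_bots=5, bot_opinion=1.0, steps=200, seed=0):
    """Opinion averaging on a random follow graph with stubborn bots.

    Bots follow no one and broadcast a fixed biased opinion; regular agents
    repeatedly average the opinions of the accounts they follow.
    """
    rng = random.Random(seed)
    n = n_agents + n_bots
    opinions = [rng.uniform(-1, 1) for _ in range(n_agents)] + [bot_opinion] * n_bots
    # Each regular agent follows a few random accounts (possibly bots).
    follows = [rng.sample(range(n), 5) for _ in range(n_agents)]
    for _ in range(steps):
        new = [sum(opinions[j] for j in f) / len(f) for f in follows]
        opinions = new + [bot_opinion] * n_bots  # bots never update
    return opinions[:n_agents]

final = simulate()
print(min(final), max(final))
```

Because the bots are the only agents that never update, any agent with an influence path to a bot is gradually pulled toward the bots' seeded opinion, which is the mechanism the abstract highlights.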


11:45 am – 1:00 pm: Lunch

1:00 pm – 1:45 pm: Sandra González-Bailón, Annenberg School, University of Pennsylvania

“The Backbone Structure of Audience Networks”

Measures of audience overlap between news sources give us information on the diversity of people’s media diets and the similarity of news outlets in terms of the audiences they share. This provides a way of addressing key questions like whether audiences are increasingly fragmented. In this paper, we use audience overlap estimates to build networks that we then analyze to extract the backbone – that is, the overlapping ties that are statistically significant. We argue that the analysis of this backbone structure offers metrics that can be used to compare news consumption patterns across countries, between groups, and over time. Our analytical approach offers a new way of understanding audience structures that can enable more comparative research and, thus, more empirically grounded theoretical understandings of audience behavior in an increasingly digital media environment.
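One common way to extract statistically significant ties from a weighted overlap network is a disparity-filter-style test; the sketch below is an illustrative assumption, not necessarily the significance test the paper uses.

```python
def backbone(edges, alpha=0.05):
    """Disparity-filter sketch: keep ties whose weight is unlikely under a
    null model that spreads each node's total overlap uniformly across its ties.

    edges: {(u, v): weight} for an undirected audience-overlap network.
    """
    strength, degree = {}, {}
    for (u, v), w in edges.items():
        for node in (u, v):
            strength[node] = strength.get(node, 0.0) + w
            degree[node] = degree.get(node, 0) + 1
    kept = set()
    for (u, v), w in edges.items():
        for node in (u, v):
            k = degree[node]
            if k <= 1:
                continue  # a single tie carries no evidence either way
            p = (1.0 - w / strength[node]) ** (k - 1)  # null-model p-value
            if p < alpha:
                kept.add((u, v))
    return kept

# Toy example: one dominant overlap tie survives, weak ties are pruned.
overlaps = {("A", "B"): 10.0, ("A", "C"): 0.1, ("A", "D"): 0.1}
print(backbone(overlaps))
```

On the toy example only the heavy A–B tie is retained, which is the intended behavior: the backbone keeps ties that are disproportionately strong relative to each node's total overlap.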


2:00 pm – 2:45 pm: Evan Sadler, Department of Economics, Columbia University

“Influence Campaigns”

In a model of social learning with coarse beliefs, we find qualitatively different outcomes from those in standard models: disagreement is generic, influence is multi-faceted, and information aggregation becomes impossible. We obtain a natural framework in which to study echo chambers and strategies to manipulate beliefs in a population. I characterize a multi-dimensional measure of influence, evaluate interventions, and highlight why increased polarization may help achieve certain goals.


3:00 pm – 3:45 pm: Donghee Jo,  Department of Economics, MIT

“Better the Devil You Know: An Online Field Experiment on News Consumption”

This paper investigates the causal link between the public’s self-selective exposure to like-minded partisan media and polarization. I first present a parsimonious model to formalize a traditionally neglected channel through which media selection leads to reduced polarization. In a world where the media heavily distorts signals with its own partisan preferences, familiarity with media biases is vitally important. By choosing like-minded partisan media, news consumers are exposed to familiar news sources. This may enable them to arrive at better estimates of the underlying truth, which can contribute to an alleviation of polarization. The predictions of this model are supported by experimental evidence collected from a South Korean mobile news application that I created and used to set up an RCT. The users of the app were given access to curated articles on key political issues and were regularly asked about their views on those issues. Some randomly selected users were allowed to select the news source from which to read an article; others were given randomly selected articles. The users who selected their news sources showed larger changes in their policy views and were less likely to have radical policy views—an alleviation of polarization—in comparison with those who read randomly provided articles. The belief updating and media selection patterns are consistent with the model’s predictions, suggesting that the mechanism explained in the model is plausible. The findings suggest that the designers of news curation algorithms and their regulators should consider the readers’ familiarity with news sources and its consequences on polarization.


4:00 pm – 4:45 pm:  Srijan Kumar, Department of Computer Science, Stanford University

“Conflicts and Sockpuppets on the Web”

Users organize themselves into communities on the web. These communities are polarized and interact with one another, often leading to conflicts and to the deceptive use of multiple accounts. In this talk, we take a data-driven approach to investigate how communities attack one another and how users deceptively operate multiple accounts. First, we define and study inter-community attacks on Reddit using 40 months of comments made on the platform. We find that conflicts are initiated by a handful of toxic communities and are marked by the formation of echo chambers. Not surprisingly, they are detrimental to the attacked community. We then create an LSTM model that predicts with high accuracy whether a post will lead to a conflict. Next, we study thousands of users who deceptively use multiple accounts, called sockpuppets, in online discussions. We find characteristic markers in the activity, writing style, and network structure of sockpuppet accounts. We categorize pairs of sockpuppets by their deceptiveness and supportiveness. Finally, we create a model to predict whether two accounts belong to the same user. Altogether, this talk presents a data-driven view of conflict and deception on web platforms.
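The idea of matching accounts by writing-style markers can be illustrated with a toy sketch. The three features below (mean post length, punctuation rate, type-token ratio) and the cosine-similarity comparison are illustrative stand-ins, not the features or model from the talk.

```python
import math

def style_features(posts):
    """Crude writing-style fingerprint for a list of posts by one account."""
    text = " ".join(posts)
    words = text.split()
    return [
        sum(len(p) for p in posts) / len(posts),             # mean post length
        sum(c in ".,!?" for c in text) / max(len(text), 1),  # punctuation rate
        len(set(words)) / max(len(words), 1),                # type-token ratio
    ]

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

acct1 = style_features(["Totally agree!!", "No way, wrong!!"])
acct2 = style_features(["Totally agree!!", "So wrong, no way!!"])
print(similarity(acct1, acct2))  # high similarity flags a candidate sockpuppet pair
```

A real detector would use far richer activity and network-structure signals, but the core step is the same: score account pairs by similarity and flag the most suspicious ones for review.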



For more details about this event please visit: