Evidence-based policymaking depends on measurements. But the field lacks robust, evidence-based measurements of how influence operations spread, what effects they have, and how well countermeasures work, all of which are needed to support community resilience and appropriate policy interventions.
To begin addressing this gap, PCIO and the Empirical Studies of Conflict Project at Princeton University convened three working groups with more than 40 researchers from North America, Europe, and Latin America, producing six studies. The project culminated in a Measurements Symposium with more than 60 participants from across the research community, government, and philanthropies. Some of the projects emerging from and funded by this initiative are below.
As a result of this work and years of investigation evaluating the feasibility of overcoming critical barriers to understanding the information environment, Carnegie and Princeton University are partnering to design the Institute for Research on the Information Environment (IRIE), an international resource to study information ecosystems that can spur evidence-based policy solutions.
Marcelo Sartori Locatelli, Josemar Caetano, Wagner Meira Jr., and Virgilio Almeida
As a major video-sharing platform, YouTube significantly influences the spread of information. With the United States and Brazil having the two highest recorded COVID-19 death tolls, researchers monitored the removal of American and Brazilian vaccine-related content on the platform and analyzed discourse and engagement in the comment sections. They found that American anti-vaccine content generates significantly higher user-to-user engagement than comparable Brazilian content, and that it tends to provoke considerably more toxic and negative discussion than its pro-vaccine counterpart. Additionally, YouTube removed only approximately 16% of the anti-vaccine content overall, with significant variation between the United States (34%) and Brazil (9.8%).
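Measuring removal rates like these requires repeatedly checking whether tracked videos are still available. Below is a minimal sketch of one way to do that with the YouTube Data API v3; the study's actual pipeline is not described here, so the key, schema, and sample IDs are assumptions for illustration.

```python
# Hypothetical sketch: videos absent from a videos.list response have been
# removed (or made private), so comparing tracked IDs against the response
# yields a removal count over time.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # assumption: a standard Data API v3 key
youtube = build("youtube", "v3", developerKey=API_KEY)

def check_removed(video_ids):
    """Return the subset of video_ids that no longer resolve on YouTube."""
    removed = set(video_ids)
    # The videos.list endpoint accepts at most 50 IDs per call.
    for i in range(0, len(video_ids), 50):
        batch = video_ids[i:i + 50]
        response = youtube.videos().list(part="id", id=",".join(batch)).execute()
        for item in response.get("items", []):
            removed.discard(item["id"])  # still live on the platform
    return removed

tracked = ["dQw4w9WgXcQ", "someRemovedId"]  # hypothetical tracked sample
print(check_removed(tracked))
```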
Jamile Santana and Laís Martins
While Brazilian women journalists broadly face direct attacks on Twitter, researchers found even more disturbing language directed at Black and Indigenous women journalists. Beyond the misogyny that targets them as women, these journalists face personal attacks that aim to discredit their fight to end racism and to defend the constitutional rights of Indigenous peoples.
Jamile Santana
Twitter users who attack Brazilian journalists attempt to silence the press and to delegitimize women’s intellectual capacity to practice the profession. Attacks against women journalists focus on their physical appearance, divert attention from their journalistic agenda, and spread false information about them. Overall, Brazilian women journalists are more exposed to direct attacks on Twitter than their male colleagues. Monitoring 200 profiles of Brazilian journalists on Twitter (133 women and 67 men) with a dictionary of derogatory words, researchers collected 7.1 million tweets with offensive content. From May 1 to September 27, 2021, they identified more than 8,300 tweets with five or more retweets or likes, which were manually verified as direct attacks.
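A minimal sketch of the dictionary-based monitoring step described above follows. The term list, tweet schema, and exact engagement rule are assumptions; as in the study, dictionary hits are only candidates and still require manual verification, since matches include quotes, irony, and counter-speech.

```python
import re

DEROGATORY_TERMS = ["termo1", "termo2"]  # hypothetical placeholder dictionary
pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, DEROGATORY_TERMS)) + r")\b",
    re.IGNORECASE,
)

def candidate_attacks(tweets, min_engagement=5):
    """Yield tweets containing a dictionary term that meet the engagement bar.

    Each tweet is a dict with 'text', 'retweet_count', and 'like_count' keys
    (an assumed schema).
    """
    for tweet in tweets:
        if not pattern.search(tweet["text"]):
            continue
        if (tweet["retweet_count"] >= min_engagement
                or tweet["like_count"] >= min_engagement):
            yield tweet
```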
João Guilherme Bastos dos Santos, Nina Santos, Caio Machado, Luiza Bandeira, Fernanda K. Martins, Jade Becari, Barbara Libório, Jamile Santana, Viktor Chagas, Renata Hirota, Felippe Mercurio
The year 2020 was considered the most dangerous in Brazil's recent history to be a professional journalist. The country's media environment plummeted from a rating of "Open" to "Restricted," and attacks on journalists, especially women journalists, have been a key factor in this decline. In addition to targeting minority groups, these influence operations against journalists are characterized by their cross-platform nature, as perpetrators leverage digital platform features to coordinate harassment and spread disinformation. This research sought to understand how online violence against journalists is fostered in Brazil, how women and non-white journalists are targeted, and how these operations benefit from different platform features. We used a mixed-methods approach that combined semi-structured in-depth interviews with 13 Brazilian journalists who have suffered online violence and analysis of data collected from Twitter, YouTube, and WhatsApp, studied through qualitative, network, and lexical analysis. Qualitative methods were used to verify and interpret the attacks in our sample; network analysis was used to build networks of Twitter hashtags and YouTube recommendations and to identify the clusters of actors involved; and lexical analysis identified the words and expressions used to attack journalists according to their gender and race. The interviews revealed a widespread perception that women and non-white journalists are targeted more frequently than their male and white counterparts, and that Twitter is the most problematic platform. Our data analysis confirmed these perceptions: of the five journalists most attacked on Twitter, four were women, including the single most attacked journalist. We also found that the hashtags used in attacks on media outlets come from the same actors who support President Jair Bolsonaro's re-election campaign and criticize the Parliamentary Committee on the Pandemic, which investigates failures in the government's handling of the crisis. Finally, attackers employed different vocabularies, varying in particular with the gender and race of the journalist targeted. Beyond links connecting Twitter and YouTube, the main point of convergence between the attacks is the textual pattern of hostile comments found on both platforms.
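One common way to build the hashtag networks this study describes is a co-occurrence graph, where hashtags appearing in the same tweet are linked and community detection then surfaces clusters of coordinated actors. The sketch below assumes tweets are dicts with a 'hashtags' list; the study's exact construction and the sample hashtags are illustrative assumptions.

```python
import itertools
import networkx as nx
from networkx.algorithms import community

def hashtag_network(tweets):
    """Build a weighted co-occurrence graph of hashtags."""
    G = nx.Graph()
    for tweet in tweets:
        tags = sorted({t.lower() for t in tweet["hashtags"]})
        for a, b in itertools.combinations(tags, 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)
    return G

tweets = [{"hashtags": ["GloboLixo", "CPIdoCirco"]},
          {"hashtags": ["GloboLixo", "ForaCPI"]}]  # hypothetical sample
G = hashtag_network(tweets)
# Community detection reveals clusters of hashtags pushed by related actors.
clusters = community.greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])
```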
David A. Broniatowski
Among the goals of information operations is to change the overall information environment vis-à-vis specific actors. For example, “trolling campaigns” seek to undermine the credibility of specific public figures, leading others to distrust them and intimidating the figures into silence. To accomplish these aims, information operations frequently make use of “trolls,” malicious online actors who direct verbal abuse at these figures. In Brazil, in particular, allies of the current president have been accused of operating a “hate cabinet,” a trolling operation that targets journalists who have alleged corruption by the president and other members of his government. Leading approaches to detecting harmful speech, such as Google’s Perspective API, seek to identify specific messages with harmful content. While this approach is helpful for identifying content to downrank, flag, or remove, it is known to be brittle and may miss attempts to introduce more subtle biases into the discourse. Here, we aim to develop a measure of how targeted information operations change the overall valence, or appraisal, of specific actors. Preliminary results suggest that known campaigns target female journalists more than male journalists, and that these campaigns may leave detectable traces in overall Twitter discourse.
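To make the idea of actor-level valence concrete, here is a minimal sketch that averages off-the-shelf sentiment scores over all tweets mentioning each tracked actor. It assumes English text, the VADER model, and a simple tweet schema; the paper's actual measure (and any handling of Portuguese-language discourse) is not specified here, so this is illustrative only.

```python
from collections import defaultdict
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def actor_valence(tweets):
    """Average sentiment of tweets mentioning each tracked actor.

    Tweets are dicts with 'text' and 'mentions' keys (assumed schema). A
    sustained drop in an actor's average valence relative to baseline is the
    kind of trace a targeted campaign might leave in overall discourse.
    """
    scores = defaultdict(list)
    for tweet in tweets:
        polarity = analyzer.polarity_scores(tweet["text"])["compound"]
        for actor in tweet["mentions"]:
            scores[actor].append(polarity)
    return {actor: sum(v) / len(v) for actor, v in scores.items()}
```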
Tamar Mitts, Nilima Pisharody, and Jacob N. Shapiro
We study the impact of removing anti-vaccine content on social media activity. We follow 160 Facebook groups discussing COVID-19 vaccines from April 13 through September 13, 2021; 36 anti-vaccine groups were removed during the study period. Using a stacked difference-in-differences design, we find that these removals had substantial impacts on the social media activity of users who engaged with the removed groups on other platforms. In particular, Facebook's removals of anti-vaccine groups led to a 10 to 33 percent increase in the rate of anti-vaccine rhetoric among Twitter users who had linked to the removed groups, over the month after the removals. These results suggest that taking down anti-vaccine content from one platform can increase the production of anti-vaccine content on other platforms by those most directly engaged with the removed content.
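For readers unfamiliar with the design: in a stacked difference-in-differences, each removal event contributes its own "stack" of treated users plus not-yet-treated controls, and event-specific unit and time fixed effects absorb level differences. The sketch below shows the estimating step under assumed inputs (a long-format user-week panel); the paper's exact specification, windows, and clustering are assumptions here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df columns (assumed): user, week, event, treated, post, antivax_rate
df = pd.read_csv("stacked_panel.csv")  # hypothetical input file
df["unit_event"] = df["user"].astype(str) + "_" + df["event"].astype(str)
df["time_event"] = df["week"].astype(str) + "_" + df["event"].astype(str)

# Main effects of 'treated' and 'post' are absorbed by the event-specific
# unit and time fixed effects, so only the interaction is estimated.
model = smf.ols(
    "antivax_rate ~ treated:post + C(unit_event) + C(time_event)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["user"]})
print(model.params["treated:post"])  # the difference-in-differences estimate
```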
Cody Buntain, Martin Innes, Tamar Mitts, and Jacob N. Shapiro
We study the impact on social media usage of the ‘Great Deplatforming’ that followed the January 6, 2021, insurrection, when Facebook, Twitter, and YouTube removed tens of thousands of accounts. We identify three key patterns. First, there was substantial intentional movement to alternative platforms, much of it announced on mainstream channels such as Twitter and Facebook. Second, the deplatforming triggered a sustained increase in interest in Gab but much smaller changes for other alternative platforms. Third, discourse on Gab shifted dramatically: it increased in volume, as one would expect given the increase in interest, but also came to include more hate speech and more discussion of narratives around voter fraud and censorship of right-wing ideas.
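A simple before/after comparison conveys the shape of this measurement. The sketch below assumes a DataFrame of Gab posts with 'timestamp' and 'text' columns and uses a placeholder hate-term lexicon standing in for whatever classifier the study actually used.

```python
import pandas as pd

HATE_TERMS = {"slur1", "slur2"}  # hypothetical placeholder lexicon
CUTOFF = pd.Timestamp("2021-01-06")

posts = pd.read_csv("gab_posts.csv", parse_dates=["timestamp"])  # assumed file
posts["hateful"] = posts["text"].str.lower().apply(
    lambda t: any(term in t.split() for term in HATE_TERMS)
)
posts["period"] = (posts["timestamp"] >= CUTOFF).map({False: "pre", True: "post"})

# Volume and hate-speech share, before vs. after the deplatforming.
summary = posts.groupby("period").agg(
    n_posts=("text", "size"),
    hate_share=("hateful", "mean"),
)
print(summary)
```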
Hause Lin, Adam Berinsky, Dean Eckles, David Rand, and Gordon Pennycook
Our project sets out to develop a paradigm for using online ads to test the efficacy of interventions against misinformation on social media. Past work has found that simple prompts, or “nudges,” that remind people about accuracy are sufficient to improve the quality of the content they share on social media. These prompts are effective because people are often distracted from considering whether content is accurate before they choose to share it. We are testing this approach with targeted ads on Twitter to investigate whether messaging about accuracy is sufficient to increase the quality of the content people share. These experiments will provide evidence for the feasibility of using ad campaigns to scale up the testing of interventions on social media.
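One common outcome measure in this literature scores the "quality" of a user's shares by the trust ratings of the news domains they link to (e.g., ratings from professional fact-checkers). The sketch below illustrates that scoring under assumed inputs; the project's exact outcome measure and rating source are assumptions.

```python
from urllib.parse import urlparse

DOMAIN_RATINGS = {"reliablenews.example": 0.9,
                  "hyperpartisan.example": 0.2}  # hypothetical trust ratings

def share_quality(shared_urls):
    """Mean trust rating over the rated domains a user shared."""
    ratings = []
    for url in shared_urls:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in DOMAIN_RATINGS:
            ratings.append(DOMAIN_RATINGS[domain])
    return sum(ratings) / len(ratings) if ratings else None

# Comparing mean share quality between users shown the accuracy ad and a
# control group yields the treatment effect the experiment is designed to detect.
print(share_quality(["https://www.reliablenews.example/story"]))
```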