Investigators and researchers studying influence operations face many difficult, practical questions in their work. How can they tell the difference between a coordinated influence campaign and an authentic groundswell of public sentiment? How can they responsibly and ethically expose malign activity without compromising the privacy of unwitting bystanders? How can they estimate the impacts of an influence operation?
In other investigative disciplines, perennial questions like these can be answered with the help of best practices that are disseminated via training programs, certification programs, and other institutional mechanisms. But the emerging study of influence operations does not yet have many clear-cut best practices. In a 2020 survey of the counter–influence operations community by the Partnership for Countering Influence Operations (PCIO), a third of respondents cited a lack of common research standards as one of the biggest challenges to their work.
Ideally, best practices could help bring more quality and consistency to influence operations research. Early-stage investigators would particularly benefit from best practices as a teaching tool, but they are not the only potential beneficiaries. By articulating expectations in such areas as methodology, investigative techniques, and attribution, best practice guidance could give other stakeholders (such as funders and journalists) a framework for assessing the research being done. Moreover, the process of developing and refining best practice guidance could itself serve as a valuable opportunity for practitioners to share information about and critically examine prevailing investigative methods.
Filling this gap requires a more detailed understanding of where the need for developing best practices is greatest, what specific goals best practice guidance might help achieve, and how such guidance could be shared. PCIO therefore conducted a follow-up survey in 2021. Respondents included ninety-seven counter–influence operations professionals from around the world in academia, civil society, government, media, and technology.1
This survey confirms that the majority of those working in this field do not currently have access to best practice guidance, and they agree that there is a need for it. Respondents pointed to a range of goals that best practice guidance would help accomplish, including helping early-stage investigators become more mature, helping prompt more dialogue in the community about practices, and helping outside parties—such as donors or journalists—assess the quality of influence operations research. Respondents agreed that at least some of the new best practice guidance should be openly published.
Despite these areas of consensus, there was much less agreement on practical next steps. For example, our ninety-seven survey respondents suggested thirty-five ways to better publicize and share existing best practices. The most popular ideas involved holding public or private events, publishing research, and creating central repositories to make finding and accessing relevant information easier.
This lack of consensus reflects the range of activities, goals, and priorities present in the community. Just as investigating influence operations in different contexts requires different capabilities and resources, so different actors will need different kinds of best practice guidance and will seek to access and share it in different ways.
Findings
Where Are Our Survey Respondents Based and What Do They Do?
Despite PCIO’s efforts to identify and attract respondents from a wide geographic area, the majority of respondents reported being located in North America (60 percent) and Europe (19 percent). The remaining geographic groups had six respondents or fewer. The challenge of finding and engaging with community members outside of North America and Western Europe is one that PCIO has grappled with during our exercise to map global initiatives, our previous surveys, and our series of private events to share global perspectives on influence operations investigations.2
Despite the heavy concentration of respondents based in North America and Europe, there was a more even distribution of the geographic areas they studied (see figure 1). Almost half of all respondents stated their work had a global focus.3
Academics formed the largest single group with twenty-nine respondents, followed by those working in civil society (twenty-four), government and intergovernmental organizations (twenty-two), tech (fourteen), and media (eight) (see figure 2).
Research was the most commonly cited focus of respondents’ work (42 percent). The next-largest focus area was investigations (11 percent), followed closely by countermeasures (10 percent) (see figure 3).
Availability of Best Practice Guidance
Just over one-third (38 percent) of respondents said they had access to written best practice guidance for investigating influence operations. Of these, only fifteen respondents said the guidance they had access to was publicly available. Of the twenty-two respondents who said the guidance they had access to was available internally, only four respondents said there were plans to publish this guidance in the future, seven respondents said they or their organizations would be willing to consider publishing internal guidance, and the remaining eleven respondents said there were no plans to make the internal guidance public.
When broken down by geographic region, respondents from Europe reported the best access to written guidance, with 53 percent saying they had access, while 34 percent of respondents from North America and 35 percent of respondents from other regions said they had access (see figure 4).
When broken down by sector, respondents working in tech reported the highest levels of access to written guidance (57 percent), although only one said this guidance was publicly available; that respondent cited the National Democratic Institute’s guide, “Data Analytics for Social Media Monitoring.” Respondents from academia (34 percent), civil society (38 percent), and government and intergovernmental organizations (41 percent) were less likely to have access to written guidance. Respondents working in media reported the lowest levels, with only one respondent out of eight in the sector reporting access (see figure 5).
Where Is Best Practice Guidance Most Needed?
We established in our 2020 community surveys that a lack of best practice guidance is seen as a significant challenge faced by the community. In this new survey, 61 percent of respondents reported that they did not have access to guidance. So where is the need for best practice guidance most acute, and are some areas already well covered?
We asked about nine areas of best practice guidance. In each of these nine areas, fewer than a third of respondents reported access to best practice guidance. Only one area (analysis techniques) had more than a quarter of respondents reporting access.
The three areas in which the fewest respondents identified existing best practice guidance—improving the diversity of subject matter investigated, approaches to attribution, and mentoring early-stage investigators—are also the areas that the largest numbers of respondents identified as having the most acute need for guidance (see figure 6).
When asked to choose just one aspect of influence operations about which they would most like to see best practice guidance made publicly available, almost one-third of respondents (31 percent) chose developing a rigorous investigations methodology (see figure 7). It was not possible to determine whether respondents chose this answer because they saw weaknesses in their own methodologies or in those of others.
Developing a rigorous methodology was also the most commonly selected answer by respondents in North America (36 percent of respondents), Europe (21 percent), and the rest of the world (25 percent). Respondents located in North America chose approaches to attribution as their second-most common response (21 percent of respondents). In Europe, mentoring early-stage investigators was also chosen by 21 percent of respondents, and in the rest of the world, analysis techniques was the second-most commonly selected answer (20 percent).
How Might Published Best Practice Guidance Help Accomplish the Outlined Goals?
Having established that most of our survey respondents did not have access to best practice guidance and believed there was a need for guidance across all areas identified in the survey, we sought to understand what respondents believed the benefits of such guidance would be.
According to our survey, best practice guidance is expected to have a wide range of substantial benefits. For each of the seven goals we asked about, at least 80 percent of respondents thought that best practice guidance would help (see figure 8). The goal chosen by the most respondents was helping early-stage investigators become more mature. The next-most-chosen goal was prompting more dialogue about practices in the investigative community.
Respondents felt least confident about the ability of best practice guidance to help donors make strategic funding decisions and to lower barriers to entry in underserved regions of the world. Even here, however, the vast majority of respondents believed that best practice guidance would help a great deal (32 percent) or somewhat (49 percent).
How to Better Share Best Practices
Once best practice guidance has been developed, the next challenge is how best to publicize and share it. Survey respondents had many different ideas about how best practices could be shared, with less consensus here than on other parts of our survey. Ten survey respondents either did not answer this question or said they didn’t know how to answer it, and one respondent said they had access to everything they felt they needed. The remaining eighty-seven respondents suggested thirty-five different ways best practices could be shared, most of which fell into seven broad categories: published reports and journals, collaboration and networking, events, a centralized repository for tools and guidance, training and education, newsletters and email distribution lists, and promotion in traditional and social media (see figure 9).
Just over one-third of respondents who answered this question suggested publishing research as a good way of sharing best practices. Specific suggestions included research papers, articles in peer-reviewed journals, manuals, handbooks, and websites. Respondents stressed the need to make these publications easy to find online and easy to download. Six respondents suggested that publications should incorporate transparent methodology sections to facilitate critical assessment of research and help early-stage investigators learn from others’ work. Some respondents emphasized the need to keep written material brief, out of concern that lengthy publications would not be read by those who needed them most. Two respondents highlighted the need to make this material available in more languages so that it would be more accessible to a global audience.
Collaboration and networking was the next-most-popular suggestion. Six respondents cited cross-sector collaboration, including Zeve Sanderson, the founding executive director of New York University’s Center for Social Media and Politics, who suggested increased collaboration between academics, civil society, and journalists. Three respondents cited PCIO’s work in convening the community; one suggested forming an expert task force, and another urged more collaboration between researchers and government. Five respondents cited networking to encourage collaborations, one suggested a professional association that would endorse best practices and share them among its members, and another called for engagement with international processes, such as those at the United Nations.
A quarter of respondents who answered this question suggested that events could be an effective way of sharing best practices. These suggestions ranged from small, private workshops to public or private working groups to webinars and briefings for larger audiences. Other respondents suggested either encouraging existing conferences to include more influence operations–related sessions in their programming or creating new conferences focused on influence operations. While opportunities for the community to network and participate in conferences and events already exist, such as the Digital Forensic Research Lab’s annual 360/Open Summit, there is clearly a desire in the community for more dedicated events and for a higher profile for this work at existing conferences in adjacent disciplines.
Another popular suggestion was the pooling of resources, such as best practice guidance and tools, in a central location. Respondents had many suggestions for what such an initiative might involve, such as an organization that collected and hosted links to tools, reports, resources, and even data sets. Some respondents suggested that such an organization might even take the lead in setting and maintaining standards or overarching principles to guide the work of influence operations researchers. Others suggested a searchable website or online database where users could filter existing resources by the specific areas they were interested in learning about.
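To make the filtering idea concrete, the sketch below shows one minimal way a tag-based resource index could work. It is purely illustrative and not drawn from any respondent’s proposal; the entries, URLs, and tags are all hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """One entry in a hypothetical shared repository of guidance and tools."""
    title: str
    url: str
    tags: set[str] = field(default_factory=set)

# Hypothetical entries; a real repository would need curation and vetting.
REPOSITORY = [
    Resource("Social Media Monitoring Primer", "https://example.org/primer",
             tags={"analysis techniques", "tools"}),
    Resource("Attribution Checklist", "https://example.org/attribution",
             tags={"attribution", "methodology"}),
]

def search(wanted: set[str]) -> list[Resource]:
    """Return resources tagged with at least one requested topic area."""
    return [r for r in REPOSITORY if r.tags & wanted]

# Example: a user filters the repository for guidance on attribution.
for resource in search({"attribution"}):
    print(resource.title, resource.url)
```

Even in this toy form, one design choice is visible: a shared, controlled tag vocabulary is what makes filtering useful, which is one reason some respondents wanted a single organization to curate the repository.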
Some respondents recognized the challenges posed by the creation and maintenance of such a repository, given the wide scope of work undertaken by the community researching or countering influence operations. Other key questions include who might house such a repository, how and by whom best practice guidance and tools should be developed or assessed for inclusion, and how such an initiative might be funded. Additionally, if this guidance were password-protected, as some respondents suggested it should be, who should have the power to vet potential users and grant access?
Only one respondent provided a practical recommendation for implementation, which indicates that this proposal remains in its early stages. Corina Rebegea, a governance and foreign policy expert at the National Democratic Institute, suggested that donors could bring research groups together in a working group or create a repository of materials.
Thirteen respondents suggested that education, training, or mentoring could be an effective way of sharing best practices. Ideas included instructional videos, outreach to new entrants to the field, and school curricula and textbooks. Five respondents recommended a dedicated newsletter or email distribution list; four suggested promoting best practices on social media, and one recommended promotion in traditional media.
A recurring theme was the need to protect the information by making participation private, invitation-only, or password-protected for vetted users. Three respondents questioned whether detailed guidance should be openly shared at all, citing the risk that this information could be used by malicious actors.
What Should Best Practices Look Like?
Seventeen respondents offered suggestions for what best practice guidance might look like, and again there were many ideas but little consensus. Six of the respondents gave suggestions that would particularly help early-stage investigators. Two of them recommended more formal training in influence operations–related skills to make it easier for new entrants to learn the basics and improve their skills, expressing concern that the barrier to entry for written guides was too high: written guidance assumed too much prior knowledge and was hard for newcomers to follow. One wanted to see more guidance on investigative techniques and writing style tips, another recommended that training be made as simple as possible, and a third advocated standardized terminology for the field. Finally, one respondent recommended short, easily consumable training material that reflects the diversity of work covered by members of the community.
Four respondents made suggestions related to improving guidance on methodology. Two of them specifically highlighted the need for best practice guidance on attribution. Other suggestions included guidance on privacy and ethics, data storage, and more transparency of methodological principles and approaches.
One respondent said there were “shared, high-level questions that everyone must be able to answer before calling something an influence operation or attributing it to a state or nonstate actor.” She argued for guidance on these high-level questions rather than on specifics, which vary more between researchers depending on the scope of their work, skill sets, and access to data. On the other hand, Cody Buntain, an assistant professor at the University of Maryland’s College of Information Studies, said that “including an example pipeline that demonstrates an end-to-end analysis of a particular collection would be useful. A particular description of practice can be implemented in many different ways, so having some foundational example of these decisions would facilitate and accelerate this work.”
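As a purely hypothetical illustration of what such an example pipeline might cover at its simplest, the sketch below flags identical text posted by many distinct accounts within a short time window, one common coordination heuristic. It is not taken from the survey or from any cited guidance; the file name, column names, and thresholds are all assumptions.

```python
import csv
from collections import defaultdict
from datetime import datetime, timedelta

def load_posts(path: str) -> list[dict]:
    """Load a hypothetical collection: a CSV with user, timestamp
    (ISO 8601), and text columns."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def copypasta_clusters(posts: list[dict], window_hours: float = 1.0,
                       min_accounts: int = 5) -> dict[str, set[str]]:
    """Group posts by identical (normalized) text and flag texts shared
    by many distinct accounts within a short window."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[post["text"].strip().lower()].append(post)
    flagged = {}
    for text, group in by_text.items():
        times = sorted(datetime.fromisoformat(p["timestamp"]) for p in group)
        accounts = {p["user"] for p in group}
        if (len(accounts) >= min_accounts
                and times[-1] - times[0] <= timedelta(hours=window_hours)):
            flagged[text] = accounts
    return flagged

if __name__ == "__main__":
    posts = load_posts("collection.csv")  # hypothetical input file
    for text, accounts in copypasta_clusters(posts).items():
        print(len(accounts), "accounts posted:", text[:60])
```

An end-to-end example of this kind would, as Buntain suggests, make the many implicit decisions visible: how text is normalized, what time window counts as coordinated, and how many accounts are enough to flag, all of which vary between researchers.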
One respondent called for a shift in focus from sharing information about influence operations to examining their real-world effects and the demand for influence products. Another respondent said they wanted to see resources tailored more specifically to particular target audiences, adding “our distribution methodology directly targets individuals and organizations by creating resources that meet their needs. U.S. academics, researchers, and investigators would also do well to speak with a more global community of actors who, in some instances, have best practices [that researchers in the United States] could learn from.”4
Finally, one respondent said,
“While more mature organizations cannot easily share their investigative tradecraft, due to the risk that malign actors could take advantage, there is room for them to investigate and share best practices from a black box perspective. That way, outside researchers could approach the investigation without inside access to the systems. This could also help identify opportunities for platforms to safely allow greater access to outside researchers.”5
Improving the skills of early-stage investigators and providing resources that are easy to understand have been recurring themes in this survey, as has a recognition that the work conducted by members of the influence operations research community is broad in scope. This presents a particularly difficult set of challenges when developing best practices. Is it possible to develop guidance that is high-level enough to be relevant to a wide audience yet detailed enough to be practically useful? When funding and manpower are limited, whose need is greatest? And given the concerns raised by survey respondents about guidance being used by malign actors against the community, who decides who gets access to guidance? Despite these outstanding questions, community members overwhelmingly desire more best practice guidance and believe it would offer a wide range of significant benefits. Further study is therefore needed to determine whether and how these implementation challenges could be overcome.
Peer Review Practices
In the absence of widely available and agreed-upon best practice guidance, respondents were asked what methods they currently used to ensure quality in their own influence operations–related investigations. Three-quarters of all respondents reported using internal peer review and over half (57 percent) cited external peer review. Respondents working in media and civil society were most likely to use internal peer review, while those working in academia, civil society, and tech were most likely to use external peer review (see figure 10).
Respondents working in government were the least likely to incorporate peer review processes in their work. Three respondents in this category selected “other”; one said the question was not applicable to their work, one said reviews were conducted by partners (perhaps meaning reviewers from a separate but collaborating organization), and the other said they examined performance measures. Respondents provided additional answers after selecting “other,” including data triangulation and statistical tests, measuring impact in communities, trial and error, and fact-checking.
With the exception of respondents working in government, respondents across the board reported high levels of internal and external peer review. Peer review processes are well established in academia, but they may be less formal or less developed in other sectors, and incorporating peer review into publication processes can pose challenges. Peer review can delay publication, which is problematic when there is pressure to publish quickly. Some researchers may also find peer review harder to access: those who work alone or in very small teams may have no one to ask and little capacity to review others’ work in return, and those in government may find few structures in place to enable peer review or may struggle to find colleagues with the time, skills, and security clearances to review their work.
Conclusion
The results of this survey demonstrate that there is an overwhelming desire in the influence operations research community for more best practice guidance across a wide range of investigative areas, as well as a belief that such guidance would have numerous tangible benefits for investigators and for those who consume and support their work. In particular, respondents believed such guidance would most benefit early-stage investigators seeking to improve their skills. A majority of respondents also said this guidance could facilitate dialogue about practices in the community and help donors make strategic funding decisions.
Nevertheless, a range of implementation challenges would need to be resolved to develop and disseminate best practice guidance. For example, the diversity in the scope of investigative work and in the types of individuals and organizations undertaking this work means that many different types of guidance may be needed, requiring substantial resources and careful prioritization. To be effective, best practice guidance would need to be widely promulgated to many different audiences—yet dissemination would also need to be somehow controlled (through mechanisms that don’t yet exist) to prevent exploitation by bad actors.
The range of practical ideas offered by respondents suggests a broad interest in tackling these implementation challenges. But respondents’ ideas often lacked detail and sometimes conflicted with each other, indicating that community dialogue about best practices remains at an early stage. Future efforts should dive deeper into specific practical issues—such as how to create a central repository of tools and guidance—to see whether progress can be made. While many questions remain unanswered, there is clear consensus in the community on the need for best practice guidance and the benefits it would bring.
Appendix A: Published Best Practice Guidance Cited by Survey Respondents
Twelve respondents, all of whom reported they had access to published best practice guidance, provided links to published guidance. None of these resources were provided by more than one respondent. Two other respondents who reported access to published guidance did not include links in their survey responses. The following is a list of the guidance suggested by respondents.
- Countering Disinformation: A Guide to Promoting Information Integrity
- EU Disinfo Lab
- RESIST 2 Counter Disinformation Toolkit
- The Demtech Navigator, Oxford Internet Institute, Programme on Democracy and Technology
- Henrik Twetman, Marina Paramonova, and Monika Hanley, “Social Media Monitoring: A Primer,” NATO Strategic Communications Centre of Excellence, February 12, 2021
- Field Guide to “Fake News,” First Draft
- Nick Monaco and Daniel Arnaudo, “Data Analytics for Social Media Monitoring: Guidance on Social Media Monitoring and Analysis Techniques, Tools and Methodologies,” National Democratic Institute, May 2020
- Interference 2020: Foreign Interference Attribution Tracker, Digital Forensic Research Lab and the Atlantic Council (attribution framework)
- Verification Handbook: For Disinformation and Media Manipulation, ed. Craig Silverman, European Journalism Centre
- CASOS Tools, Center for Computational Analysis of Social and Organizational Systems, Carnegie Mellon University
- Carl Miller and Chloe Colliver, “The 101 of Disinformation Detection,” Institute for Strategic Dialogue, August 13, 2020
- Carl Miller and Chloe Colliver, “Developing a Civil Society Response to Online Manipulation,” Institute for Strategic Dialogue, August 13, 2020
- Digital Sherlocks (password-protected)
- Cogsec-Collaborative, Adversarial Misinformation and Influence Tactics and Techniques (AMITT)
Notes
1 PCIO advertised the best practice survey through our daily morning media briefing, on Twitter, and in emails to our working groups between June 21 and September 30, 2021. During this time, the PCIO Morning Media Brief had between 269 and 357 subscribers. The @IOPartnership Twitter account had 2,193 followers by October 8, 2021. In total, we received ninety-seven completed responses and one partial response, which was excluded from the findings.
2 Global Perspectives on Influence Operations Investigations was a series of seven private, invitation-only discussions run by PCIO in 2021 that attempted to bring in researchers from all over the world to talk about their research.
3 For this question, respondents could select multiple geographic areas for the focus of their work, meaning the total adds up to more than ninety-seven.
4 The lack of connection between Western researchers and those located in other regions was also raised as a challenge in PCIO’s research, “Global Perspectives on Influence Operations: Shared Challenges, Unequal Resources.”
5 Black box testing is defined by the U.S. National Institute of Standards and Technology as “a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied to virtually every level of software testing: unit, integration, system and acceptance.” See “Glossary: Black Box Testing,” Computer Security Resource Center, Information Technology Laboratory, National Institute of Standards and Technology, accessed July 14, 2022, https://csrc.nist.gov/glossary/term/black_box_testing.