Using linguistic and rhetorical theory, researchers developed an improved machine-learning model to detect conspiracy theory language. This report describes the results and suggests ways to counter the effects of online conspiracy theories.
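The report does not reproduce its model here, but a minimal sketch of the general approach, supervised text classification over labeled posts, might look like the following. The toy training examples and the TF-IDF-plus-logistic-regression pipeline are illustrative assumptions, not the authors' actual method, which drew on richer linguistic and rhetorical features.

    # Minimal sketch of supervised text classification for flagging
    # conspiracy-style language. The tiny training set and the
    # TF-IDF + logistic regression pipeline are illustrative
    # assumptions, not the report's actual model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled posts: 1 = conspiracy-style, 0 = other.
    texts = [
        "They are hiding the truth about the vaccine from all of us",
        "The election was secretly rigged by a shadowy global cabal",
        "City council approved the new budget on Tuesday",
        "The weather service forecasts rain for the weekend",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)

    # Score a new post; closer to 1.0 means more conspiracy-like.
    print(model.predict_proba(["insiders are covering up what really happened"])[0][1])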
The increasing frequency and intensity of information aggression targeting the United States and its European allies demand more thorough consideration of concepts and practices for protecting against, resisting, and mitigating the effects of psychological manipulation and influence. Russia, in particular, often appears to use messaging and intimidation as part of its efforts to influence multiple actors and countries, including the United States and its European allies. However, concepts and practices for understanding and resisting the potential effects of these efforts by Russia and its agents are few. To address this gap, United States European Command (USEUCOM) asked the RAND Corporation to identify strategies for defending against the effects of Russia’s efforts to manipulate and inappropriately influence troops, government decisionmaking, and civilians. In this report, RAND researchers describe apparent efforts by Russia and its agents to use information to shape Russia’s operating environment, focusing on the European context; review and apply existing research on influence and manipulation to these efforts; and draw from existing practice to describe defensive approaches that USEUCOM and its various partners can consider when defending against these actions. The framework they use offers a way to conceptualize the objectives, tactics, and tools of Russian information efforts in Europe.
Russia might again try to manipulate and divide U.S. voters via social media during the 2020 U.S. political campaign season. Given these past and likely ongoing threats to U.S. elections, the California Governor's Office of Emergency Services asked RAND researchers to help analyze, forecast, and mitigate threats by foreign actors targeting local, state, and national elections. This report, the first of four in a series, reviews some of the research on information efforts by foreign actors, focusing mainly on Russia and online environments. This material is aimed at helping policymakers and the public understand and mitigate the threat of online foreign interference in national, state, and local elections.
This dissertation addresses the proliferation of false information shared by users on social media during U.S. election seasons. I characterize false information on social media in stages of Production, Dissemination, and Consumption. Social media platforms have responded by developing Remove, Reduce, and, recently, Inform policies that broadly respond to each stage, respectively. This dissertation investigated potential Inform policies and how social media platforms can apply them. The most applicable policy options are summarized for Facebook, Twitter, and YouTube in short industry white papers in the appendices. After clarifying terms used in discussing disinformation (Chapter 2), I make the case for social media platforms enacting or expanding their Inform policies to help users resist false information (Chapter 3). I then conduct a literature review of other fields with similar Inform prerogatives (such as public health, advertising, and social psychology) and identify principles that social media policymakers could use in constructing their Inform policies (Chapter 4). I then offer two proofs of concept for applying Inform policies. I network 7 million tweets using the Louvain community detection algorithm and conduct a text analysis to demonstrate principles for how a platform could find the users most exposed to false information. I recommend a framework for how platforms could locate users most exposed to disinformation in a least-biased way to avoid accusations of political bias (Chapter 5). I then apply the most promising finding from the literature review, inoculation or "pre-bunks," in a survey experiment to help respondents (n = 634) resist behavioral reactions to disinformation memes from the 2016 U.S. presidential campaign. I test how inoculations interact with emotions and how that relates to resistance to persuasion from disinformation. I find that inoculations never hurt and sometimes improved resistance to disinformation memes (Chapter 6). Finally, I suggest that platforms like Facebook, Twitter, and YouTube apply or expand Inform policies by building trust through transparency and communication with users; focusing on evidence-based, direct and indirect approaches for helping users resist false information; and applying principles of influence, with what they know of their users, to build attractive products that help users resist false information.
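As a rough illustration of the Chapter 5 community detection step, the sketch below runs the Louvain algorithm on a toy retweet network with networkx. The edge list is invented; the dissertation's actual pipeline over 7 million tweets would require far more data engineering.

    # Toy illustration of Louvain community detection on a retweet
    # network, in the spirit of Chapter 5. The edge list is invented;
    # the real analysis networked roughly 7 million tweets.
    import networkx as nx

    # Undirected, weighted graph: an edge means one user retweeted
    # another, weighted by how often.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("alice", "bob", 3), ("bob", "carol", 2), ("alice", "carol", 1),
        ("dave", "erin", 4), ("erin", "frank", 2), ("dave", "frank", 3),
        ("carol", "dave", 1),  # weak bridge between the two clusters
    ])

    # Louvain greedily maximizes modularity to partition users into
    # densely connected communities (networkx >= 2.8).
    communities = nx.community.louvain_communities(G, weight="weight", seed=42)
    for i, members in enumerate(communities):
        print(f"community {i}: {sorted(members)}")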
The authors developed a repeatable process to measure the effectiveness of U.S. Central Command intelligence, surveillance, and reconnaissance operations; evaluate current performance; and plan for, influence, and resource future operations.
This report describes a database of online tools developed by nonprofit civil society organizations to reduce the spread of online disinformation.
In view of new and increasingly sophisticated threats from peer and near-peer adversaries, the authors suggest reforms to the processes by which intelligence informs the U.S. Air Force acquisition enterprise.
Although the benefits of digital engineering might not be immediately apparent in terms of cost savings or schedule reduction, survey results indicate that it has the potential to provide significant long-term benefits to defense acquisition.
The authors of this report developed and tested machine-learning methods to detect speech patterns that indicate deception or truthfulness during simulated security clearance background interviews.
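The report does not detail its feature set, so the sketch below assumes one common approach: converting each interview response into simple linguistic cues (hedging terms, self-references, response length) and training a standard classifier on them. The cue list, toy transcripts, and labels are illustrative assumptions, not the study's actual method.

    # Hedged sketch: hand-crafted linguistic cues from interview
    # responses fed to a random forest. The cues, toy transcripts,
    # and labels are assumptions for illustration only.
    import re
    from sklearn.ensemble import RandomForestClassifier

    HEDGES = {"maybe", "possibly", "probably", "think", "guess"}

    def featurize(response):
        words = re.findall(r"[a-z']+", response.lower())
        n = max(len(words), 1)
        return [
            len(words),                                      # response length
            sum(w in HEDGES for w in words) / n,             # hedging rate
            sum(w in {"i", "me", "my"} for w in words) / n,  # self-reference rate
        ]

    # Hypothetical responses: 1 = deceptive, 0 = truthful.
    responses = [
        "I think I maybe visited that country once, I guess",
        "Possibly there was some contact, I probably forgot",
        "I traveled to Canada in June 2019 for a conference",
        "My last foreign contact was my cousin in March",
    ]
    labels = [1, 1, 0, 0]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit([featurize(r) for r in responses], labels)
    print(clf.predict([featurize("maybe I met him, I really can't say")]))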
The U.S. Congress exercises oversight over federal agencies (including the U.S. Department of Defense) through various committees. Trends in what is said in these committees could signal the emergence of salient issues for policymakers. For example, if members of the House and Senate Armed Services Committees (HASC and SASC, respectively) talk about diversity in the military over several years, then it might suggest that diversity-related issues are becoming more salient. This trend could be a signal for policymakers at various levels within the Pentagon to prepare for questions from Congress about these issues. To this end, RAND researchers developed a workflow that draws on various tools for acquiring and organizing large volumes of data from HASC and SASC. In this Perspective, they describe a proof of concept for how to acquire and begin analyzing text data for policy analysis. One could use this workflow to develop a more sophisticated toolkit for analyzing congressional text data.
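A minimal version of the trend-tracking idea might look like the following sketch; the transcript snippets and the keyword are invented stand-ins for the HASC and SASC hearing data the workflow would actually acquire.

    # Minimal sketch of the trend-tracking idea: count how often a
    # term appears in hearing transcripts by year. The snippets are
    # invented stand-ins for real HASC/SASC transcript data.
    import re
    from collections import defaultdict

    # year -> list of transcript excerpts (hypothetical).
    hearings = {
        2019: ["The committee discussed readiness and modernization."],
        2020: ["Members raised diversity in recruiting.",
               "Diversity and inclusion came up during testimony."],
        2021: ["Witnesses addressed diversity across the services.",
               "Diversity initiatives were questioned at length."],
    }

    term = "diversity"
    counts = defaultdict(int)
    for year, transcripts in hearings.items():
        for text in transcripts:
            counts[year] += len(re.findall(rf"\b{term}\b", text.lower()))

    # A rising count across years could flag an increasingly salient issue.
    for year in sorted(counts):
        print(year, counts[year])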