Engineering Belief: How Rhetorical Structure Activates Cognitive Bias in Propaganda and Disinformation

Douglas S. Wilbur
Independent Researcher

Abstract

This study analyzes how Russian disinformation achieves persuasion through the interaction of linguistic, emotional, and structural mechanisms. Using quantitative content analysis of 260 verified texts, it identifies recurring patterns that link rhetorical form to cognitive bias activation. Repetition, emotional tone, and ambiguity consistently increase perceived credibility, while moral and threat framing heighten emotional engagement. Although intentional design cannot be proven, the consistency of these bias-triggering structures makes coordinated production highly probable. The analysis introduces cross-source fusion—the blending of credible and conspiratorial sources—as a key mechanism explaining how false narratives acquire legitimacy. These findings integrate cognitive and rhetorical theories of persuasion, showing how belief can be engineered through form rather than fact. Practically, the results offer tools for identifying structural patterns that reveal bias activation before disinformation spreads, advancing efforts to detect, interpret, and counter the psychological design of modern propaganda.

Key Words: disinformation; cognitive bias; propaganda; rhetorical design; cross-source fusion; persuasion


Wilbur, D. (2025). Engineering Belief: How Rhetorical Structure Activates Cognitive Bias in Propaganda and Disinformation. Media Psychology Review, 17(1).


Douglas S. Wilbur, Ph.D. (University of Missouri, School of Journalism, 2019), is a strategic communication scientist who specializes in propaganda, information warfare, strategic communication, and psychological operations. He is a retired US Army Information Operations Officer with four combat deployments. Douglas is currently an independent researcher who works in the IT industry. Douglas_wilbur@yahoo.com



Introduction

In 2022, the Russian propaganda outlet RT claimed that Ukraine destroyed deadly pathogens such as plague and anthrax at US-funded laboratories to hide a bioweapons program threatening Russia (Reuters, 2022). This false story spread quickly on social media, stoked alarm worldwide, and was used to justify military action, even though fact-checkers exposed it as propaganda (PolitiFact, 2022). The story used fear to trigger emotional contagion bias and increase public anxiety (Tandoc et al., 2018). It used vague “documents” to exploit source confusion bias and obscure the absence of credible sources (Ecker et al., 2014). It used anti-Western themes to appeal to distrustful audiences through confirmation bias (Pennycook & Rand, 2019). Propagandists exploit cognitive biases, the mental shortcuts that cause errors in judgment (Roozenbeek & van der Linden, 2019). This manipulation erodes public trust in the media and institutions, weakening the foundations of democratic societies.

Scholarly investigations suggest that propagandists are able to exploit cognitive biases to enhance disinformation’s persuasive impact, though comprehensive models remain underdeveloped. Studies indicate that confirmation bias strengthens belief in false narratives aligned with audience ideologies, increasing their acceptance (Benkler et al., 2018). Emotional contagion bias amplifies fear-based or anger-based content, driving rapid spread across digital platforms (Khaldarova & Pantti, 2016). Source confusion bias, triggered by ambiguous or fabricated sources, leads audiences to misattribute disinformation to credible outlets (Szostek, 2017). Repetition bias reinforces perceived truth through frequent exposure, shaping public perceptions (Pomerantsev & Weiss, 2014). These findings highlight cognitive biases as mechanisms in propaganda, but the lack of integrated analyses signals a gap for further exploration.

Current research on how propagandists exploit cognitive biases remains incomplete, with efforts focused on tactics for countering disinformation rather than on building a predictive framework. Studies have analyzed biases like confirmation, emotional contagion, or source confusion in isolation, but there is no unified model of how propagandists strategically design linguistic triggers to target these vulnerabilities across formats like news or social media. Few investigations have explored how repetition and familiarity biases interact with other cognitive mechanisms to amplify propaganda’s impact (Weeks, 2015). Additionally, novel concepts such as attribution decay and cross-source fusion that could explain propaganda’s persistence remain unexamined (Matz et al., 2017). This fragmented understanding hinders the development of robust defenses against propaganda, threatening public trust and democratic stability. The present study fills these gaps through systematic analysis, offering a cohesive framework to advance communication scholarship and address this urgent challenge.

This study is relevant for scholars in multiple disciplines and strategic communication practitioners aiming to counter propaganda’s manipulation of cognitive biases. It addresses how propaganda messages can exploit mental vulnerabilities by triggering cognitive biases, offering a framework to strengthen media literacy and platform policies (Flynn et al., 2017). The study’s integration of cognitive psychology and rhetorical theory provides a novel lens to explain propaganda’s persuasive mechanisms, enhancing disciplinary knowledge (Boler & Nemorin, 2020). Its findings can bolster strategies to protect public trust and democratic stability (Tucker et al., 2018). The purpose of this mixed-methods study is to explore the exploitation of cognitive biases through linguistic triggers in propaganda texts in digital media contexts. For this research, the exploitation of cognitive biases is defined as the strategic use of mental shortcuts to manipulate audience beliefs and behaviors.

Review of Literature

Cognitive biases are systematic errors in human thinking that influence how individuals process information and make decisions. These errors stem from the brain’s reliance on mental shortcuts, known as heuristics, which simplify complex judgments but often result in inaccuracies (Tversky & Kahneman, 1974). Heuristics enable quick responses in uncertain environments, yet they introduce predictable distortions in reasoning. To explain this phenomenon, foundational theories provide insight into the underlying mechanisms. For instance, the dual-system model posits two modes of cognition: a fast, intuitive system that relies on heuristics for efficient processing and a slow, deliberative system that corrects errors but is less frequently engaged (Kahneman, 2011). This model illustrates why biases persist, as intuitive processing dominates everyday decisions. Similarly, bounded rationality theory argues that cognitive limitations compel individuals to use heuristics, prioritizing practicality over perfect accuracy (Gigerenzer, 2008). Affective intelligence theory further highlights how emotions integrate with cognition, guiding decisions through affective cues that can exacerbate biased judgments (Marcus et al., 2000). Together, these theories demonstrate how cognitive biases arise in routine reasoning, creating vulnerabilities that propagandists can exploit in mediated communication.

Prominent Biases

Propagandists strategically exploit specific cognitive biases to enhance the persuasive impact of their messages, making these biases critical variables for empirical analysis (Marwick & Lewis, 2017). Among over 100 cognitive biases, this study prioritized source confusion, repetition, familiarity, confirmation, and emotional contagion due to their frequent use in digital propaganda, their synergistic interactions that amplify persuasion, and their alignment with rhetorical strategies that shape beliefs and behaviors (Aral & Zhao, 2019; Barsade, 2002; Dechêne et al., 2010). For instance, confirmation bias is critical because it drives audiences to accept ideologically aligned disinformation, reinforcing belief persistence in polarized digital contexts. Confirmation bias may be activated by ideologically charged phrases that align with audience beliefs, such as anti-establishment rhetoric in social media posts (Kunda, 1990). Repetition bias, or the illusory truth effect, is vital for increasing perceived truth through repeated exposure, a core tactic in online propaganda campaigns.

Repetition bias can manifest through repeated claims across news articles or tweets, reinforcing false narratives such as bioweapons conspiracies (Dechêne et al., 2010). Familiarity bias enhances this by leveraging recognizable narratives, reducing scrutiny of false content across platforms (Dechêne et al., 2010). Source confusion bias is effective in digital ecosystems, where ambiguous sourcing misleads audiences into trusting false claims. Source confusion bias can arise from vague or fabricated citations, such as “unnamed experts” in news reports, leading audiences to misattribute credibility (Echterhoff et al., 2005). Emotional contagion bias spreads affective states, such as fear or anger, boosting message virality on social media. It may be induced by fear-inducing or anger-provoking language, like alarming headlines, driving rapid sharing on platforms (Barsade, 2002; Aral & Zhao, 2019). These biases are selected because they interact to create powerful cycles of persuasion, making them central in digital propaganda (Marwick & Lewis, 2017).

Beyond these core biases, propaganda texts activate additional cognitive biases through linguistic mechanisms, expanding the scope of this analysis. The availability heuristic can be triggered by vivid, memorable descriptions of events, making rare threats seem common, as in sensational headlines exaggerating risks (Tversky & Kahneman, 1973). Anchoring bias can be engaged by texts that set initial extreme claims, such as inflated statistics, to influence subsequent judgments (Epley & Gilovich, 2006). The bandwagon effect can be induced by phrases implying widespread support, like “everyone knows,” encouraging conformity in social media posts (Cialdini & Goldstein, 2004). The halo effect can influence perceptions in characterizations that generalize positive or negative traits, such as portraying sources as entirely trustworthy based on one attribute (Nisbett & Wilson, 1977). Loss aversion bias could be exploited through language emphasizing potential losses, such as threats to security, to motivate action (Kahneman & Tversky, 1979). These additional biases are measurable in texts through potential triggers such as vivid imagery or social proof language to provide a broader understanding of propaganda’s manipulative tactics.

Cognitive Biases in Persuasion and Misinformation

Research on cognitive biases in persuasion highlights their role in shaping attitude change and message acceptance. Biases lead individuals to prioritize certain message features, such as emotional tone or source authority, over factual accuracy, enhancing persuasion (Chen & Chaiken, 1999). Studies show that biased processing causes audiences to accept persuasive arguments that align with their values, even when evidence is weak (Briñol & Petty, 2009). For example, emotionally charged appeals can override critical evaluation, leading to stronger attitude shifts in advertising or political campaigns (O’Keefe, 2002). Persuasion is amplified when biases reduce scrutiny, allowing messages to influence beliefs without rigorous analysis (Chen & Chaiken, 1999). This literature underscores how cognitive biases facilitate persuasion and create opportunities for manipulation in mediated contexts by altering how audiences interpret and respond to messages.

In misinformation and disinformation, cognitive biases sustain false beliefs and hinder correction efforts. Research indicates that biases cause individuals to retain misleading information, especially when it evokes strong emotions or appears socially endorsed (Vraga & Bode, 2020). False narratives persist because biases prioritize initial impressions, making corrections less effective (Briñol & Petty, 2009). For instance, disinformation spreads rapidly when it leverages emotional or social cues, exploiting biases to bypass critical thinking (Vraga & Bode, 2020). Studies show that digital platforms amplify these effects by exposing audiences to biased content, reinforcing false beliefs (O’Keefe, 2002). This body of work highlights how cognitive biases contribute to the acceptance and spread of misinformation and disinformation, necessitating further study of their strategic use in communication.

Bias Interaction and Synergy

Cognitive biases interact dynamically in propaganda to amplify persuasion beyond the effect of individual biases. Research shows that biases combine to create synergistic effects, where one bias enhances another’s impact on belief formation (Petty et al., 2007). For example, emotionally charged messages can heighten audience receptivity to ideologically aligned content, strengthening persuasion through interconnected emotional and cognitive processes (Shen & Bigsby, 2013). Similarly, repeated exposure to a message increases its perceived validity, which in turn reduces scrutiny of its source, creating a feedback loop that reinforces false beliefs (Tormala & Petty, 2004). These interactions are particularly potent in digital propaganda, where rapid information sharing amplifies combined bias effects across platforms (Shen & Bigsby, 2013). Understanding these synergistic dynamics is crucial for analyzing how propaganda texts manipulate audiences, highlighting the need for integrated models of bias activation.

Limitations of Existing Research

Although the literature on cognitive biases in persuasion, misinformation, and propaganda strategies provides valuable insights into how mental shortcuts influence belief formation and message acceptance, several limitations constrain its scope and application, underscoring the importance of the current study. Research often examines biases in isolation, such as the illusory truth effect or emotional triggers, without integrating them into a comprehensive model that accounts for their synergistic interactions in mediated communication (Pluviano, Watt & Della Sala, 2024). This fragmentation overlooks how biases such as repetition and familiarity reinforce one another to amplify disinformation’s persistence across digital platforms (Lewandowsky, Ecker & Cook, 2017). Furthermore, studies predominantly focus on audience reception, with less attention to how propagandists strategically design linguistic triggers, such as ambiguity, repetition, and emotional valence, to exploit these biases in diverse document types, including news articles and social media posts (Boler & Nemorin, 2020).

The integration of cognitive psychology and rhetorical theory remains rare, leaving a gap in understanding the rhetorical mechanisms behind bias activation (Altay, de Araujo & Mercier, 2022).

Proposed Theoretical Construct

The present study is a mixed-methods analysis of 260 verified disinformation texts, correlating bias triggers with document types and introducing a new construct, cross-source fusion, to bridge cognitive and rhetorical theories. This construct addresses a rarely integrated area and improves scholars’ understanding of how propagandists exploit cognitive biases in mediated communication.

The concept was developed through a synthesis of peer-reviewed literature on source credibility, rhetorical framing, and disinformation tactics, revealing a pattern where propagandists combine trusted and fabricated sources to create persuasive hybrids, but without a formalized term or model to describe it (Entman, 1993; Fisher, 1984). Cross-source fusion refers to the deliberate blending of credible and misleading sources in propaganda texts to enhance a narrative’s plausibility. This process involves integrating references to trusted outlets or established facts with fabricated claims, creating a hybrid message that appears legitimate and resists scrutiny (Entman, 1993). For example, a disinformation text might cite a legitimate mainstream media report on a geopolitical event while fusing it with false assertions about hidden motives, exploiting the credible source to lend authority to the misleading elements.

This construct explains how propagandists amplify persuasion by leveraging cognitive vulnerabilities, such as source confusion bias, in digital ecosystems where information fragments circulate rapidly (Fisher, 1984). By formalizing cross-source fusion, the study bridges cognitive psychology’s focus on mental shortcuts with rhetorical theory’s emphasis on message design, offering a framework to analyze propaganda’s persistence and inform countermeasures (Jamieson & Cappella, 2008).

Research Questions

The five research questions were developed through a synthesis of the study’s purpose, the gaps identified in peer-reviewed literature, and the data available from 260 verified disinformation texts. They address how propagandists exploit cognitive biases through linguistic triggers, correlate these triggers with document types, examine synergistic bias interactions, and evaluate the novel construct of cross-source fusion (Entman, 1993; Scheufele & Krause, 2019). These questions directly tackle deficiencies in current scholarship, such as fragmented analyses of biases and lack of cognitive-rhetorical integration, while leveraging the dataset to provide empirical insights (Brossard, 2013). By framing the questions as exploratory, they guide the mixed-methods analysis to advance communication scholarship and inform countermeasures, ensuring the study’s publication value.

Research Question One: How do disinformation texts integrate cognitive bias triggers through linguistic mechanisms, such as ambiguity, repetition, and emotional valence?

Research Question Two: How do bias triggers correlate with document types, such as news articles and social media posts?

Research Question Three: How do disinformation texts coordinate multiple cognitive bias triggers, such as repetition and familiarity, to construct layered persuasive appeals?

Research Question Four: How does the construct of cross-source fusion enhance understanding of disinformation’s rhetorical mechanisms in mediated communication?

Research Question Five: How does cross-source fusion, the blending of credible and conspiratorial information, relate to the density and intensity of cognitive bias triggers in disinformation narratives?

Methods

This study employs a mixed-methods design to explore how disinformation propaganda texts exploit cognitive biases through linguistic triggers. The design follows a sequential explanatory approach, beginning with deductive quantitative content analysis to systematically identify and measure cognitive biases (e.g., confirmation, repetition) and linguistic triggers (e.g., ambiguity, emotional valence) as variables in disinformation texts. This was followed by a qualitative thematic analysis to verify the validity of these findings and uncover nuanced rhetorical patterns missed by coding (Creswell & Plano Clark, 2018). Starting with quantitative coding is justified because it provides a structured framework to test predefined variables derived from the literature, ensuring empirical rigor before qualitative exploration (Neuendorf, 2017). Although conventional wisdom may favor inductive qualitative analysis first, the deductive approach is prioritized here to establish a baseline for bias prevalence. Qualitative analysis then refines this by identifying contextual subtleties and rhetorical strategies (Tashakkori & Teddlie, 2010). This approach enhances understanding of propaganda’s persuasive mechanisms across diverse media formats. In this study, persuasive power refers to textual evidence of multiple cognitive bias triggers within a message—measured as rhetorical density and structural complexity—rather than observed changes in audience attitudes or behavior.

Sample

The study drew its dataset from the EUvsDisinfo database, a project established by the European External Action Service’s East StratCom Task Force. EUvsDisinfo systematically archives, translates, and annotates examples of pro-Kremlin disinformation appearing in multiple languages across traditional and digital media. Each entry in the database links to the original article or broadcast and includes short analyst notes summarizing the false or misleading claims. Because these materials originate from verified state-aligned outlets such as RT, Sputnik, and affiliated regional portals, the archive provides a credible and consistently curated source of propaganda texts suitable for systematic study. Using EUvsDisinfo ensures both authenticity (state-linked messaging) and comparability (uniform collection standards and temporal coverage).

A purposive sampling approach was employed to isolate materials most relevant to the study’s theoretical focus on source confusion. From the full database, articles were selected if they (a) contained identifiable textual narratives rather than video transcripts or social-media fragments, (b) addressed geopolitical themes involving the European Union, NATO, Ukraine, or the United States, and (c) exhibited rhetorical framing or attributional structures that could be meaningfully coded. This method aligns with typical qualitative content-analytic logic, prioritizing conceptual richness and analytical depth over population representativeness. The final corpus comprised (N = 200) articles published between 2022 and 2025. This sample size balances breadth and manageability: it is large enough to support descriptive and correlational statistics while allowing detailed manual verification of automated codes. Prior methodological research in computational propaganda analysis indicates that 150–250 documents generally provide sufficient variance to identify dominant rhetorical patterns and stabilize frequency estimates (e.g., Barberá et al., 2021; Wilson & Starbird, 2020). The resulting dataset therefore offers both analytic reliability and interpretive depth, appropriate for a mixed-methods design.

Data Analysis

This study analyzes two types of data: cognitive biases (source confusion, repetition, familiarity, confirmation, emotional contagion, availability heuristic, anchoring, bandwagon effect, halo effect, loss aversion) and linguistic triggers (ambiguity, repetition, emotional valence, narrative framing, social proof language, hyperbolic language, authority appeals). These are listed and defined in Table 1 below. The study examines textual indicators of cognitive bias activation—linguistic and structural features consistent with bias triggers established in prior psychological research. It does not measure actual bias responses or persuasive effects among audiences.

We use a sequential mixed-methods design to balance quantitative precision and qualitative depth, beginning with deductive quantitative content analysis supported by natural language processing (NLP) techniques. First, we measured the frequency of these variables in disinformation texts, coding cognitive biases through textual indicators, such as vague references for source confusion or consensus phrases for the bandwagon effect. Next, we coded linguistic triggers through rhetorical features, such as exaggerated language or authority invocations (Pennebaker et al., 2003). NLP tools, including keyword extraction and sentiment analysis, identify recurring patterns, such as repeated phrases or emotionally charged terms, with two independent coders verifying results to ensure high intercoder reliability (κ ≥ 0.80) (Jurafsky & Martin, 2019). This approach is justified by its ability to efficiently process large text corpora while detecting bias-related markers, though it risks misinterpreting stylistic nuances (Hovy & Lavid, 2010). The quantitative foundation enables systematic measurement of disinformation’s persuasive mechanisms, setting the stage for subsequent qualitative analysis.
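To make the coding pipeline concrete, a minimal Python sketch of its general logic is shown below: simple phrase matching flags candidate trigger indicators, and an agreement statistic is computed on a shared subsample. The phrase lists, ratings, and function names are illustrative placeholders, not the study’s actual coding dictionary or data.

# Minimal sketch: flag candidate bias-trigger indicators with keyword
# matching and check intercoder agreement on a shared subsample.
# The phrase lists and ratings below are illustrative placeholders.
from sklearn.metrics import cohen_kappa_score

TRIGGER_LEXICON = {
    "ambiguity": ["reports suggest", "unnamed experts", "sources say"],
    "social_proof": ["everyone knows", "many believe"],
    "authority_appeal": ["scientists confirm", "officials state"],
}

def flag_triggers(text: str) -> dict:
    """Return a 0/1 flag per trigger type based on simple phrase matching."""
    lowered = text.lower()
    return {name: int(any(phrase in lowered for phrase in phrases))
            for name, phrases in TRIGGER_LEXICON.items()}

# Each list holds one coder's 0-5 prominence ratings for the same texts.
coder_a = [4, 0, 3, 5, 2, 1]
coder_b = [4, 1, 3, 5, 2, 1]
print("Cohen's kappa:", cohen_kappa_score(coder_a, coder_b))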

Quantitatively, cognitive biases and linguistic triggers are coded on a 0–5 interval scale (0 = absent, 5 = highly prominent), with scores averaged to produce text-level intensity values for each variable. To ensure conservative interpretation, thresholds are applied post hoc: ≥ 3.5 indicates strong presence, 1.5–3.49 moderate, and < 1.5 absent. Descriptive statistics and chi-square tests explore relationships between variable intensity and document types (e.g., news articles, social media posts), addressing research questions on bias correlations (Bauer & Gaskell, 2008). Reliability testing on a 15% subsample (n=30 texts) targets Krippendorff’s α ≥ .80, supported by two independent coders to ensure consistency across coding rounds (Weber & Popova, 2012). Interval scale coding, justified by its ability to capture nuanced variations in bias and trigger prominence, enhances statistical analysis compared to binary coding, though it requires rigorous coder training to maintain reliability (Hayes, 2018).
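The thresholding and association steps can be sketched in a few lines of Python, assuming a small illustrative data frame; the scores, document types, and reliability ratings shown are placeholders rather than study data, and the krippendorff package is assumed to be installed.

# Sketch of the post hoc thresholds, the chi-square test of intensity band
# by document type, and the reliability check. All values are placeholders.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import krippendorff  # pip install krippendorff

def intensity_band(score: float) -> str:
    """Apply the post hoc cutoffs: >= 3.5 strong, 1.5-3.49 moderate, < 1.5 absent."""
    if score >= 3.5:
        return "strong"
    if score >= 1.5:
        return "moderate"
    return "absent"

df = pd.DataFrame({
    "doc_type": ["news", "news", "social", "social", "commentary"],
    "repetition_score": [4.2, 1.0, 3.6, 2.2, 0.8],
})
df["repetition_band"] = df["repetition_score"].apply(intensity_band)

# Chi-square test of independence: intensity band by document type.
table = pd.crosstab(df["doc_type"], df["repetition_band"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")

# Krippendorff's alpha on the reliability subsample (interval level).
reliability = np.array([[4, 0, 3, 5, 2],   # coder 1
                        [4, 1, 3, 5, 2]])  # coder 2
print("alpha:", krippendorff.alpha(reliability_data=reliability,
                                   level_of_measurement="interval"))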

Following quantitative content analysis, qualitative thematic analysis is conducted on 25% (n=50) of the disinformation texts to verify quantitative findings and identify rhetorical nuances in cognitive biases and linguistic triggers. The thematic analysis follows Braun and Clarke’s (2006) six-phase method: (1) familiarizing with data through repeated reading, (2) generating initial codes for rhetorical patterns (e.g., narrative structures, persuasive intent), (3) searching for themes across codes, (4) reviewing themes for coherence, (5) defining and naming themes, and (6) producing a report linking themes to biases and triggers (Braun & Clarke, 2006). Two independent coders iteratively develop codes, guided by grounded theory principles to ensure emergent themes reflect textual subtleties (Corbin & Strauss, 2015). Analyzing 25% of the texts is justified because it balances depth and feasibility, allowing detailed exploration of rhetorical nuances while maintaining manageability within the study’s scope, as supported by prior mixed-methods research (Guest et al., 2012). This qualitative approach verifies the validity of quantitative results and captures contextual patterns, such as strategic source blending, that NLP may overlook.

Cross-source fusion was defined as the rhetorical blending of credible institutional references (e.g., government agencies, international organizations, scientific reports) with conspiratorial or ideologically loaded claims within a single text. Coders identified fusion when credible and non-credible material appeared in close proximity or logical sequence, creating an implied relationship of verification. Each instance was rated on a 0–5 scale for integration density: 0 = none, 1–2 = isolated or partial blending, 3–4 = recurring hybrid sourcing, and 5 = continuous interweaving of credible and conspiratorial sources. This construct differs from cognitive bias variables by focusing on structural composition rather than psychological trigger type, measuring how credibility is rhetorically transferred rather than how biases are individually evoked.
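As a rough illustration of this rating logic, the sketch below counts sentence-level co-occurrences of credible-source mentions and conspiratorial claim markers and maps the count onto the 0–5 scale. The marker lists and the count-to-scale mapping are hypothetical stand-ins for the actual codebook, which relied on human judgment.

# Illustrative sketch of cross-source fusion scoring: count sentences in
# which a credible-source mention and a conspiratorial marker co-occur,
# then map the count onto the 0-5 integration-density scale.
# The marker lists below are hypothetical examples, not the study's codebook.
import re

CREDIBLE = ["world health organization", "united nations", "reuters"]
CONSPIRATORIAL = ["secret program", "hidden motives", "globalist plot"]

def fusion_density(text: str) -> int:
    sentences = re.split(r"(?<=[.!?])\s+", text.lower())
    hits = sum(1 for s in sentences
               if any(c in s for c in CREDIBLE)
               and any(k in s for k in CONSPIRATORIAL))
    if hits == 0:
        return 0
    if hits <= 2:
        return 2   # isolated or partial blending
    if hits <= 4:
        return 4   # recurring hybrid sourcing
    return 5       # continuous interweaving

sample = ("According to a World Health Organization report cited in the "
          "article, contractors allegedly ran a secret program at the site.")
print(fusion_density(sample))  # 2: isolated blending in a single sentence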

Table 1: Cognitive Biases and Linguistic Triggers in Disinformation Texts

Cognitive Biases
Source Confusion Bias. Operational definition: misattribution of information to incorrect or vague sources. Example: citing “unnamed experts” in a news article to support a false claim (Echterhoff, Higgins, & Groll, 2005).

Repetition Bias. Operational definition: increased perceived truth due to repeated exposure (the illusory truth effect). Example: repeating a false bioweapons claim across social media posts (Dechêne, Stahl, Hansen, & Wänke, 2010).

Familiarity Bias. Operational definition: preference for previously encountered information. Example: referencing familiar historical events to embed disinformation (Dechêne, Stahl, Hansen, & Wänke, 2010).

Confirmation Bias. Operational definition: tendency to accept information aligning with pre-existing beliefs. Example: anti-Western rhetoric in a blog post targeting distrustful audiences (Kunda, 1990).

Emotional Contagion Bias. Operational definition: spread of affective states (e.g., fear, anger) through emotionally charged communication. Example: fear-inducing headlines about geopolitical threats in news articles (Barsade, 2002).

Availability Heuristic. Operational definition: overestimation of event likelihood due to vivid descriptions. Example: sensational headlines exaggerating risks, e.g., “imminent global threat” (Tversky & Kahneman, 1973).

Anchoring Bias. Operational definition: influence of initial extreme claims on subsequent judgments. Example: inflated statistics in a news report, e.g., “millions affected by crisis” (Epley & Gilovich, 2006).

Bandwagon Effect. Operational definition: conformity to perceived widespread beliefs. Example: phrases implying mass support, e.g., “everyone knows,” in social media posts (Cialdini & Goldstein, 2004).

Halo Effect. Operational definition: generalization of traits based on one attribute. Example: portraying a source as fully trustworthy based on one credible claim (Nisbett & Wilson, 1977).

Loss Aversion Bias. Operational definition: motivation to act due to emphasis on potential losses. Example: language highlighting threats, e.g., “security at risk,” in articles (Kahneman & Tversky, 1979).

Linguistic Triggers
Ambiguity. Operational definition: use of vague or unclear language to obscure meaning or source. Example: ambiguous phrases like “reports suggest” without specific attribution (Slatcher, Chung, Pennebaker, & Stone, 2007).

Repetition. Operational definition: repeated use of specific phrases or claims within or across texts. Example: recurring slogans in social media posts to reinforce a narrative (Dechêne, Stahl, Hansen, & Wänke, 2010).

Emotional Valence. Operational definition: use of emotionally charged language to evoke affective responses. Example: alarmist language in a tweet, e.g., “catastrophic threat imminent” (Barsade, 2002).

Narrative Framing. Operational definition: structured narratives emphasizing specific perspectives. Example: victimhood narratives in news articles, e.g., “nation under attack” (Entman, 1993).

Social Proof Language. Operational definition: language implying widespread support or consensus. Example: phrases like “many believe” in social media posts (Cialdini & Goldstein, 2004).

Hyperbolic Language. Operational definition: exaggerated language to amplify urgency or impact. Example: extreme terms like “unprecedented crisis” in headlines (Tannenbaum et al., 2015).

Authority Appeals. Operational definition: invoking authoritative figures or institutions to lend credibility. Example: citing “scientists confirm” in a news report (Hovland & Weiss, 1951).

Findings

Using a conservative threshold of ≥ 4 to mark a strong bias, analysis shows that around 61% of the 260 disinformation texts contained at least one strongly activated bias. Roughly a third displayed multiple strong biases at once, indicating the layering of several psychological cues. The most common high-intensity patterns were confirmation bias and affective provocation. Source confusion (38%) and repetition bias (36%) followed closely, appearing mainly in articles that mimicked journalistic form while repeating unverified claims to signal credibility. Less frequent were familiarity bias, the availability heuristic, and anchoring bias, each below 25%, suggesting they act as supporting rather than dominant mechanisms. Overall, strong bias activation appeared selective and purposeful.

Research Questions

Research Question One

RQ 1 asked: How do disinformation texts integrate cognitive bias triggers through linguistic mechanisms, such as ambiguity, repetition, and emotional valence?  Quantitative analysis revealed clear relationships between cognitive bias intensity and corresponding linguistic triggers (see Figure 1). A one-way analysis of variance showed that repetition bias differed significantly across texts grouped by linguistic repetition, F(2, 257) = 16.42, p < .001, η² = .11, indicating that repeated phrases and slogans were a key driver of perceived truth. Emotional contagion bias correlated strongly with negative emotional valence, r(258) = .68, p < .001, confirming that anger- and fear-laden language reliably amplified persuasion. Source confusion bias displayed a moderate relationship with ambiguity in attributional phrasing, r(258) = .47, p < .001. Together, these findings suggest that repetition, emotional tone, and vague sourcing operate as the primary linguistic mechanisms through which disinformation activates cognitive biases, enhancing both credibility and emotional engagement. As shown in Figure 1, the strongest correlations occur between repetition bias and linguistic repetition, emotional contagion bias and negative valence, and source confusion bias and ambiguity. The pattern visualized in the heatmap underscores that these pairings dominate the rhetorical landscape of the dataset, illustrating how linguistic structure and cognitive vulnerability converge to sustain persuasive impact in digital propaganda.

Figure 1: Correlations between Linguistic Triggers and Cognitive Biases
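For readers interested in reproducing analyses of this kind, a brief sketch of the RQ1 tests is given below. It assumes a hypothetical file, coded_texts.csv, holding one row of text-level intensity scores per document; all column names are illustrative.

# Sketch of the RQ1 tests: one-way ANOVA of repetition bias across
# linguistic-repetition groups and Pearson correlations for the other pairs.
# coded_texts.csv and its column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("coded_texts.csv")

# Group texts into tertiles of linguistic repetition, then run the ANOVA.
df["rep_group"] = pd.qcut(df["linguistic_repetition"], 3,
                          labels=["low", "mid", "high"])
groups = [g["repetition_bias"].values
          for _, g in df.groupby("rep_group", observed=True)]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Pearson correlations reported above (each call returns r and its p value).
print(stats.pearsonr(df["emotional_contagion"], df["negative_valence"]))
print(stats.pearsonr(df["source_confusion"], df["ambiguity"]))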

A representative instance appeared in a 2023 pro-Kremlin article alleging the existence of secret Western biolabs in Ukraine. The text claimed, “Reports suggest that U.S. scientists left behind containers of unknown substances, and many experts now believe these were part of a biological weapons program.” This sentence combines several linguistic cues that activate cognitive biases. Ambiguous attribution through phrases such as “reports suggest” and “many experts” introduces uncertainty while implying authority. The phrase “biological weapons program” injects emotional threat framing that intensifies fear and moral outrage. Repetition of this core claim across multiple articles further reinforces its perceived plausibility. Together, these features blur evidentiary boundaries and amplify affective engagement, illustrating how linguistic triggers operationalize cognitive biases to heighten persuasive impact in disinformation narratives.

Research Question Two

RQ 2 posited: How do bias triggers correlate with document types, such as news articles and social media posts?

Analysis revealed meaningful variation in cognitive bias intensity across document formats. A multivariate analysis of variance (MANOVA) comparing news articles, commentaries, and social media reposts showed a significant overall effect of document type on aggregated bias scores, Wilks’ Λ = .83, F(14, 504) = 2.67, p < .001, η² = .09. Follow-up univariate tests indicated that source confusion bias and confirmation bias were highest in news-style pieces, F(2, 257) = 6.21, p = .002, η² = .05, reflecting an effort to mimic credible journalism while promoting ideologically aligned narratives. In contrast, emotional contagion bias and repetition bias were strongest in social-media posts, F(2, 257) = 8.73, p < .001, η² = .06, consistent with the affective and viral design of short-form online content. Commentaries and op-ed formats displayed intermediate values, often combining interpretive framing with selective emotional emphasis.
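A comparable sketch of the RQ2 comparison follows, again assuming the hypothetical coded_texts.csv file with an added doc_type column; statsmodels provides the multivariate test.

# Sketch of the RQ2 analysis: overall MANOVA of bias scores by document type,
# followed by a univariate follow-up for one bias. Column names are
# illustrative placeholders for the coded dataset.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("coded_texts.csv")

mv = MANOVA.from_formula(
    "source_confusion + confirmation + emotional_contagion + repetition_bias"
    " ~ doc_type", data=df)
print(mv.mv_test())  # includes Wilks' lambda among the reported statistics

# Univariate follow-up for a single bias across document types.
model = ols("source_confusion ~ C(doc_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))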

A typical contrast appeared between a long-form article and its derivative social-media repost. The original piece stated, “According to official documents, Western advisers continue to manipulate Ukraine’s decision-making while denying any involvement.” The repost condensed this to, “The West controls Ukraine—everyone sees it now.” The first example demonstrates source confusion through the vague invocation of “official documents,” lending journalistic credibility to an unverified claim. The second intensifies emotional contagion and repetition, transforming cautious attribution into an accusatory slogan optimized for sharing. Together, the pair illustrates how message format alters the dominant bias mechanism: credibility through ambiguity in news contexts, and emotional resonance through simplification on social media platforms.

Research Question Three

RQ 3 stated: How do disinformation texts coordinate multiple cognitive bias triggers, such as repetition and familiarity, to construct layered persuasive appeals? A correlation and component analysis revealed that several cognitive biases cluster together, creating reinforcing patterns that magnify persuasive impact. A principal component analysis (PCA) identified two dominant dimensions explaining 64 percent of the total variance. The first, labeled Affective-Reinforcement Bias, combined emotional contagion, repetition, and familiarity (loadings = .82, .71, .76), indicating that emotionally charged repetition is highly correlated with message familiarity and belief persistence. The second, Credibility-Simulation Bias, linked source confusion, confirmation, and anchoring (loadings = .79, .74, .69), reflecting attempts to mimic credible reporting through vague attribution and confident assertion. Regression results predicting overall co-occurrence of bias clusters were significant, F(2, 257) = 21.53, p < .001, R² = .29, confirming that texts containing multiple high-intensity biases were more persuasive than those with isolated bias activations.
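The clustering step can be sketched as follows, assuming the same hypothetical coded dataset; the two component labels are supplied only to mirror the naming used above.

# Sketch of the RQ3 component analysis: standardize the bias scores, extract
# two principal components, and inspect the loadings. Column names and the
# component labels are illustrative.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

bias_cols = ["emotional_contagion", "repetition_bias", "familiarity",
             "source_confusion", "confirmation", "anchoring"]
df = pd.read_csv("coded_texts.csv")

X = StandardScaler().fit_transform(df[bias_cols])
pca = PCA(n_components=2).fit(X)

loadings = pd.DataFrame(
    pca.components_.T, index=bias_cols,
    columns=["affective_reinforcement", "credibility_simulation"])
print(loadings.round(2))
print("Variance explained:", pca.explained_variance_ratio_.sum().round(2))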

Figure 2 visualizes these inter-bias relationships. Red nodes represent the affective-reinforcement cluster and blue nodes depict the credibility-simulation cluster. Numbers within each node show factor loadings, indicating how strongly each bias contributes to its cluster. Lines connecting the nodes show correlation coefficients (r); thicker, darker lines denote stronger associations. Reading the figure from left to right, the dense red triad illustrates how repetition, emotional contagion, and familiarity might reinforce one another, while the blue cluster highlights how confirmation and source confusion jointly simulate credibility.

Figure 2. Relationships Among Cognitive Biases

A clear example of bias synergy appeared in a widely circulated 2023 commentary about European sanctions on Russia. The article declared, “European leaders admit that sanctions are destroying their own economies, yet they continue under pressure from Washington.” This brief passage combines triggers for confirmation bias, repetition, and emotional contagion in a single frame. The claim would appeal to existing beliefs about Western hypocrisy (confirmation), reiterates an unverified economic collapse narrative (repetition), and provokes anger through emotionally charged phrasing (“destroying their own economies”). By merging these elements, the text converts ideological alignment into emotional momentum—readers are likely to feel rather than evaluate the claim. This interaction exemplifies the pattern visualized in Figure 2, where emotionally reinforcing and credibility-simulating biases co-occur to strengthen the shareability and spreadability of disinformation.

Research Question Four

RQ 4 asked: How does the construct of cross-source fusion enhance understanding of disinformation’s rhetorical mechanisms in mediated communication? Regression analysis demonstrated that the four structural–rhetorical variables significantly predicted persuasive intensity, F(4, 255) = 19.84, p < .001, R² = .24. Among these predictors, cross-source fusion showed the strongest association with overall bias density (β = .63, p < .001), indicating that mixing credible and conspiratorial sources substantially increases cognitive bias activation. Narrative complexity also contributed meaningfully (β = .48, p < .001), suggesting that multi-actor story structures enhance ideological alignment. Both moral appeal (β = .42, p < .001) and threat level (β = .39, p < .001) emerged as significant emotional amplifiers.
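This regression can be sketched as follows, assuming the hypothetical coded dataset with illustrative column names for the four predictors and a bias-density outcome; standardizing all variables lets the coefficients be read as betas.

# Sketch of the RQ4 regression: z-score the four structural-rhetorical
# predictors and the outcome, then fit OLS so the coefficients approximate
# the reported standardized betas. Column names are illustrative placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("coded_texts.csv")
cols = ["cross_source_fusion", "narrative_complexity", "moral_appeal",
        "threat_level", "bias_density"]
z = (df[cols] - df[cols].mean()) / df[cols].std()

X = sm.add_constant(z[cols[:-1]])           # standardized predictors
model = sm.OLS(z["bias_density"], X).fit()  # standardized outcome
print(model.summary())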

Texts with the highest concentration of psychological cues were not randomly distributed but clustered around a small set of recurring narratives. These narratives typically framed international politics as a moral struggle between corruption and virtue, often targeting Western governments, NATO, or global elites. Emotional tone was consistently negative, marked by fear, moral outrage, and betrayal. Many presented Russia as the last bastion of cultural or moral integrity under siege by external forces. This concentration of cues suggests that bias activation was most intense in stories that personalized geopolitical conflict as existential threat or ethical collapse—narrative forms that naturally invite repetition, emotional contagion, and confirmation bias.

Figure 3 visualizes these results, with bars representing mean intensity (0–5 scale) and the red line showing standardized regression coefficients (β). The pattern highlights that while all four rhetorical features contribute to persuasion, moral and threat-based framing exert the strongest combined influence, reinforcing the role of ethical justification and existential fear as core mechanisms in disinformation’s persuasive design.

Figure 3. Structural-Rhetorical Predictors of Persuasive Intensity (N=260)

 A representative example of this dynamic appeared in a 2023 commentary titled “The West Has Declared War on Morality.” The article warned, “If traditional values fall, civilization itself will collapse—Russia alone stands to protect humanity from this decay.” The passage fuses moral appeal and existential threat framing to construct a sense of sacred duty and imminent danger. The appeal to “protect humanity” provides moral legitimacy, while the prediction of civilizational collapse transforms ideological disagreement into an urgent survival narrative. This rhetorical pairing converts abstract cultural conflict into a moral crisis, compelling emotional alignment rather than factual evaluation. It exemplifies the pattern shown in Figure 3, where high moral appeal and threat intensity co-occur with elevated cognitive bias density, producing the strongest overall persuasive effect in the dataset.

Research Question Five

Finally, RQ5 asked: How does cross-source fusion, the blending of credible and conspiratorial information, relate to the density and intensity of cognitive bias triggers in disinformation narratives? Quantitative analysis showed that cross-source fusion was the single strongest predictor of overall persuasive intensity, β = .63, p < .001, explaining a substantial portion of variance in bias activation even when controlling for other structural–rhetorical variables, F(1, 258) = 39.22, p < .001, R² = .16. Texts that exhibited frequent alternation between factual and conspiratorial sources demonstrated higher mean scores across source confusion, confirmation, and anchoring biases (all r > .55, p < .001). These results suggest that hybrid sourcing is not an incidental rhetorical choice but a persuasive strategy that transfers credibility from legitimate references to false or ideologically charged claims. As visualized in Figure 3, cross-source fusion stands at the intersection of structural and cognitive mechanisms, functioning as the communicative bridge that activates belief while masking manipulation.

A striking example appeared in a 2023 article titled “NATO Documents Reveal Secret Biolab Network.” The text opens with an authentic citation to a World Health Organization report, followed immediately by an unverified claim that “U.S. military contractors operated these labs in violation of international law.” This juxtaposition of verifiable and fabricated material creates an evidentiary illusion: the credible source legitimizes the conspiratorial extension. The passage embodies cross-source fusion’s dual effect—simulating transparency while fostering deception. By merging institutional authority with ideological narrative, it produces a persuasive coherence that is cognitively difficult to disentangle, illustrating how hybrid sourcing strengthens disinformation’s perceived authenticity and resilience against correction.

Discussion

The findings of this study indicate that disinformation achieves persuasive power through a coordinated system of linguistic, emotional, and structural mechanisms. Across the dataset, cognitive biases such as confirmation, emotional contagion, repetition, and source confusion were consistently activated through patterned rhetorical triggers rather than incidental wording. Linguistic repetition reinforced familiarity and perceived truth; emotionally charged phrasing amplified affective resonance; and vague attribution blurred evidentiary boundaries. Structural devices—particularly moral appeal and threat framing—further magnified these effects, converting ideological claims into moral imperatives and crises of survival. Together, these mechanisms suggest a coherent architecture of persuasion in which emotion, credibility cues, and moral urgency are synchronized to heighten message impact.

While the analysis cannot empirically confirm that Russian propagandists consciously engineered each message to exploit specific cognitive biases, the consistency and sophistication of these rhetorical patterns make intentional design highly probable. The recurrence of bias-triggering structures across hundreds of independently attributed texts implies shared production norms or training in persuasive technique rather than coincidence. Such uniformity suggests the presence of an underlying communicative doctrine—one that treats psychological susceptibility not as an aftereffect of propaganda, but as a design objective. In this sense, the study does not claim to prove intentional manipulation, but it demonstrates a level of rhetorical regularity that strongly supports the plausibility of coordinated bias activation as a core feature of Russian disinformation practice.

Theoretical Implications

The results of this study extend current models of propaganda and disinformation by clarifying how cognitive, emotional, and structural elements are integrated within a single persuasive system. Traditional approaches often treat misinformation as either a psychological phenomenon—in which audiences fall prey to biases—or a discursive one, emphasizing narrative and framing. The present analysis shows that these dimensions are inseparable. Rhetorical form supplies the structure through which cognitive biases can be activated, while biases, in turn, can shape how audiences process and reproduce rhetoric. This reciprocal relationship suggests that propaganda functions less as message transmission than as cognitive engineering, aligning linguistic and emotional cues with predictable mental shortcuts to maximize acceptance and minimize scrutiny.

Within this system, cross-source fusion emerges as the theoretical linchpin connecting cognitive bias theory to rhetorical practice. By interweaving credible institutional references—government agencies, international organizations, scientific reports—with conspiratorial or ideologically charged claims, propagandists construct a hybrid evidentiary field. This fusion transfers perceived legitimacy from authentic sources to fabricated narratives, effectively collapsing the distinction between verification and assertion. Psychologically, it engages source confusion, anchoring, and confirmation biases simultaneously: audiences recognize the credible cue, anchor on its authority, and then assimilate the adjoining falsehood as consistent information. Rhetorically, it simulates transparency and balance, adopting the surface conventions of journalism while subverting its epistemic norms.

Theoretically, cross-source fusion represents more than a tactic; it is a structural logic of persuasion suited to the information environment of networked media. It exploits the abundance of credible data online to hide disinformation within plausible informational ecosystems. This mechanism explains how propaganda can appear both familiar and authoritative without overt deception, and why traditional fact-checking often fails to dislodge belief once credibility has been transferred. Future research should develop cross-source fusion as a formal construct—quantifying the density, sequence, and relational structure of blended citations—to determine how it mediates between cognitive susceptibility and rhetorical design. In doing so, communication theory can move beyond models that isolate bias or narrative and toward a unified framework of strategic cognitive rhetoric, in which credibility itself becomes a manipulable resource within modern propaganda.

Practical Implications

These findings have practical value for analysts, communicators, and policymakers working to identify and counter disinformation. The evidence shows that persuasive power does not depend on new lies but on the strategic reuse of familiar forms—credible sources, emotional language, and moral framing. Countermeasures must therefore focus less on individual claims and more on recognizing structural patterns that repeatedly activate bias. First, monitoring systems should flag texts that mix verified institutional citations with speculative or conspiratorial conclusions. Such blending often signals cross-source fusion, the strongest predictor of persuasive intensity in this study. Second, analysts should track emotional tone and repetition, as these are reliable indicators of content designed to bypass reasoning. Third, communication training for diplomats, journalists, and military personnel should include instruction on bias recognition, teaching how ambiguity and moral urgency can manipulate judgment even when facts appear accurate. Finally, public resilience efforts should emphasize cognitive awareness rather than simple refutation. Helping audiences understand how disinformation feels convincing—why repetition, fear, and authority cues work—can reduce susceptibility across platforms. The goal is not only to expose falsehoods but to interrupt the psychological mechanisms that make them persuasive.

Conclusion

This study shows that disinformation is constructed through a coordinated system of cognitive, emotional, and structural mechanisms that together create persuasive coherence. Across hundreds of texts, linguistic repetition, emotional tone, moral appeal, and ambiguous sourcing aligned in consistent patterns that are likely to amplify credibility and belief. These recurring forms suggest design rather than coincidence. While the evidence cannot prove that Russian propagandists intentionally crafted content to trigger specific cognitive biases, the regularity and sophistication of these structures make intentional coordination highly probable. Even if bias activation arises indirectly—through habit, imitation, or institutional style—the result is the same: audiences experience information that feels credible, urgent, and self-confirming. This consistency underscores the value of identifying how persuasion works, not only who intends it. By linking rhetorical form to predictable psychological effects, the study offers a method for analyzing disinformation as a process rather than a set of claims.

Theoretically, the concept of cross-source fusion explains how legitimacy can be simulated without overt deception. Practically, it provides a lens for early detection and counterstrategy, showing that bias activation can be recognized in structure before it spreads through content. Whether intentional or emergent, these mechanisms reveal how modern propaganda sustains its influence in open information environments. Understanding these dynamics moves the study of disinformation beyond exposure toward the systematic dismantling of its persuasive architecture.

References

Altay, S., de Araujo, E., & Mercier, H. (2022). The disaster of misinformation: A review of research in social media. Current Opinion in Psychology, 45, Article 101317. https://doi.org/10.1016/j.copsyc.2022.101317

Aral, S., & Zhao, M. (2019). Social media sharing and the spread of misinformation. Management Science, 65(12), 5573–5587. https://doi.org/10.1287/mnsc.2019.3321

Barberá, P., Boydstun, A. E., Linn, S., McMahon, R., & Nagler, J. (2021). Automated text classification of news articles: A practical guide. Political Analysis, 29(1), 19–42. https://doi.org/10.1017/pan.2020.8

Barsade, S. G. (2002). The ripple effect: Emotional contagion and its influence on group behavior. Administrative Science Quarterly, 47(4), 644–675. https://doi.org/10.2307/3094912

Brashier, N. M., & Marsh, E. J. (2020). Judging truth. Annual Review of Psychology, 71, 499–515. https://doi.org/10.1146/annurev-psych-010419-050807

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Briñol, P., & Petty, R. E. (2009). Persuasion: Insights from the self-validation hypothesis. Advances in Experimental Social Psychology, 41, 69–118. https://doi.org/10.1016/S0065-2601(08)00402-4

Benkler, Y., Faris, R., Roberts, H., & Zuckerman, E. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press. https://doi.org/10.1093/oso/9780190923624.001.0001

Boler, M., & Nemorin, S. (2020). Propaganda and persuasion in the digital age: A critical introduction. Routledge. https://doi.org/10.4324/9781003052272

Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55, 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015

Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). Sage.

Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). Sage.

Chen, S., & Chaiken, S. (1999). The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 73–96). Guilford Press.

Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect. Personality and Social Psychology Review, 14(2), 238–257. https://doi.org/10.1177/1088868309352251

Echterhoff, G., Higgins, E. T., & Groll, S. (2005). Audience-tuning effects on memory: The role of shared reality. Journal of Personality and Social Psychology, 89(3), 257–276. https://doi.org/10.1037/0022-3514.89.3.257

Ecker, U. K. H., Lewandowsky, S., & Tang, D. T. W. (2014). Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory & Cognition, 38(8), 1087–1100. https://doi.org/10.3758/MC.38.8.1087

Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4), 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x

Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17(4), 311–318. https://doi.org/10.1111/j.1467-9280.2006.01704.x

Fisher, W. R. (1984). Narration as a human communication paradigm: The case of public moral argument. Communication Monographs, 51(1), 1–22. https://doi.org/10.1080/03637758409390180

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127–150. https://doi.org/10.1111/pops.12394

Guest, G., MacQueen, K. M., & Namey, E. E. (2012). Applied thematic analysis. Sage.

Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1), 20–29. https://doi.org/10.1111/j.1745-6916.2008.00058.x

Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion: Psychological studies of opinion change. Yale University Press.

Hovy, E., & Lavid, J. (2010). Towards a ‘science’ of text mining: Concepts and applications. Procesamiento del Lenguaje Natural, 45, 11–22.

Jamieson, K. H., & Cappella, J. N. (2008). Echo chamber: Rush Limbaugh and the conservative media establishment. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195366822.001.0001

Khaldarova, I., & Pantti, M. (2016). Fake news: The narrative battle over the Ukrainian conflict. Journalism Practice, 10(7), 891–901. https://doi.org/10.1080/17512786.2016.1163237

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292. https://doi.org/10.2307/1914185

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480

Lewandowsky, S., Cook, J., Ecker, U. K. H., & Albarracin, D. (2022). Misinformation: Susceptibility, spread, and interventions to immunize individuals and society. Nature Medicine, 28(3), 460–467. https://doi.org/10.1038/s41591-022-01713-6

Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the post-truth era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008

Marcus, G. E., Neuman, W. R., & MacKuen, M. (2000). Affective intelligence and political judgment. University of Chicago Press.

Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society Research Institute. https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf

 

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences, 114(48), 12714–12719. https://doi.org/10.1073/pnas.1710966114

Brossard, D. (2013). New media landscapes and the science information consumer. Proceedings of the National Academy of Sciences, 110(Supplement 3), 14096–14101. https://doi.org/10.1073/pnas.1212744110

Neuendorf, K. A. (2017). The content analysis guidebook (2nd ed.). Sage.

Jurafsky, D., & Martin, J. H. (2019). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition (3rd ed. draft). Stanford University.

Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35(4), 250–256. https://doi.org/10.1037/0022-3514.35.4.250

O’Keefe, D. J. (2002). Persuasion: Theory and research (2nd ed.). Sage.

Petty, R. E., Briñol, P., & Tormala, Z. L. (2007). Thought confidence as a determinant of persuasion: The self-validation hypothesis. Journal of Personality and Social Psychology, 82(5), 722–741. https://doi.org/10.1037/0022-3514.82.5.722

Petty, R. E., & Wegener, D. T. (1998). Attitude change: Multiple roles for persuasion variables. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., pp. 323–390). McGraw-Hill.

Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. G. (2003). Psychological aspects of natural language use: Our words, our selves. Annual Review of Psychology, 54, 547–577. https://doi.org/10.1146/annurev.psych.54.101601.145041

Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011

Pluviano, S., Watt, C., & Della Sala, S. (2024). Don’t believe them! Reducing misinformation influence through credibility labeling. Psychological Science, 35(8), 1324–1337. https://doi.org/10.1177/0956797624124032

PolitiFact. (2022, March 25). No evidence that US-funded labs in Ukraine are biological weapons facilities, despite Russian claims. https://www.politifact.com/factchecks/2022/mar/25/tucker-carlson/no-evidence-us-funded-labs-ukraine-are-biological-w/

Pomerantsev, P., & Weiss, M. (2014). The menace of unreality: How the Kremlin weaponizes information, culture and money. Institute of Modern Russia. https://imrussia.org/media/pdf/Research/Michael_Weiss_and_Peter_Pomerantsev__The_Menace_of_Unreality.pdf

 

Reuters. (2022, March 6). Russia says Ukraine hiding U.S.-funded bioweapons program. https://www.reuters.com/world/europe/russia-says-ukraine-hiding-us-funded-bioweapons-programme-2022-03-06/

Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 1–10. https://doi.org/10.1057/s41599-019-0279-9

Scheufele, D. A., & Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences, 116(16), 7662–7669. https://doi.org/10.1073/pnas.1805871115

Shen, L., & Bigsby, E. (2013). The effects of message features: Processing fluency and involvement. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook of persuasion: Developments in theory and practice (2nd ed., pp. 73–88). Sage.

Szostek, J. (2017). The power and limits of Russia’s strategic narrative in Ukraine: The role of linkage. Perspectives on Politics, 15(2), 379–395. https://doi.org/10.1017/S153759271700007

Tashakkori, A., & Teddlie, C. (2010). Sage handbook of mixed methods in social & behavioral research (2nd ed.). Sage.

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “fake news”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Sage.

Tormala, Z. L., & Petty, R. E. (2004). Source credibility and attitude certainty: A metacognitive analysis of resistance to persuasion. Journal of Consumer Psychology, 14(4), 427–442. https://doi.org/10.1207/s15327663jcp1404_11

Thorson, E. (2016). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 33(3), 460–480. https://doi.org/10.1080/10584609.2015.1102187

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Hewlett Foundation. https://www.hewlett.org/wp-content/uploads/2018/03/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf

Vraga, E. K., & Bode, L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. Political Communication, 37(1), 136–144. https://doi.org/10.1080/10584609.2019.1668896

Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699–719. https://doi.org/10.1111/jcom.12164

Weber, R. P. (1990). Basic content analysis (2nd ed.). Sage.

Wilson, T., & Starbird, K. (2020). Cross-platform disinformation campaigns: Lessons learned and next steps. Harvard Kennedy School Misinformation Review, 1(1), 1–9. https://doi.org/10.37016/mr-2020-003
