{"id":2648,"date":"2025-10-31T19:36:50","date_gmt":"2025-10-31T19:36:50","guid":{"rendered":"https:\/\/mprcenter.org\/review\/?p=2648"},"modified":"2026-01-18T22:24:18","modified_gmt":"2026-01-18T22:24:18","slug":"engineering-belief","status":"publish","type":"post","link":"https:\/\/mprcenter.org\/review\/engineering-belief\/","title":{"rendered":"Engineering Belief: How Rhetorical Structure Activates Cognitive Bias in Propaganda and Disinformation"},"content":{"rendered":"<p><strong>Douglas S. Wilbur<\/strong><br \/>\n<em>Independent Researcher<\/em><\/p>\n<h3 style=\"text-align: left;\" align=\"center\"><b>Abstract<\/b><\/h3>\n<p>This study analyzes how Russian disinformation achieves persuasion through the interaction of linguistic, emotional, and structural mechanisms. Using quantitative content analysis of 260 verified texts, it identifies recurring patterns that link rhetorical form to cognitive bias activation. Repetition, emotional tone, and ambiguity consistently increase perceived credibility, while moral and threat framing heighten emotional engagement. Although intentional design cannot be proven, the consistency of these bias-triggering structures makes coordinated production highly probable. The analysis introduces\u00a0<strong>cross-source fusion<\/strong>\u2014the blending of credible and conspiratorial sources\u2014as a key mechanism explaining how false narratives acquire legitimacy. These findings integrate cognitive and rhetorical theories of persuasion, showing how belief can be engineered through form rather than fact. Practically, the results offer tools for identifying structural patterns that reveal bias activation before disinformation spreads, advancing efforts to detect, interpret, and counter the psychological design of modern propaganda.<\/p>\n<p>Key Words: disinformation; cognitive bias; propaganda; rhetorical design; cross-source fusion; persuasion<\/p>\n\n\t\t<div class=\"clearfix\"><\/div>\n\t\t<hr style=\"margin-top:10px; margin-bottom:10px;\" class=\"divider divider-normal\">\n\t\n\n\t\t<div class=\"tabs-shortcode tabs-wrapper container-wrapper tabs-horizontal flex-tabs is-flex-tabs-shortcodes\">\n\t\t<ul class=\"tabs\">\n\t\t<li>\n\t\t\t<a href=\"#tab-content-1\">Citation\n\t\t\t<\/a>\n\t\t<\/li>\n\t\n\t\t<li>\n\t\t\t<a href=\"#tab-content-2\">Author\n\t\t\t<\/a>\n\t\t<\/li>\n\t\n\t\t<\/ul>\n\t\n\t\t<div class=\"tab-content\" id=\"tab-content-1\">\n\t\t\t<div class=\"tab-content-wrap\"> Wilbur, D. (2025). Engineering Belief: How Rhetorical Structure Activates Cognitive Bias in Propaganda and Disinformation.\u00a0 <em>Media Psychology Review.<\/em> Vol. 
17(1)<\/p>\n\n\t\t\t<\/div>\n\t\t<\/div>\n\t\n\t\t<div class=\"tab-content\" id=\"tab-content-2\">\n\t\t\t<div class=\"tab-content-wrap\"><strong><br \/>\n<a href=\"https:\/\/mprcenter.org\/review\/wp-content\/uploads\/2025\/10\/Wilbur-Doug.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-2655 size-thumbnail\" src=\"https:\/\/mprcenter.org\/review\/wp-content\/uploads\/2025\/10\/Wilbur-Doug-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/mprcenter.org\/review\/wp-content\/uploads\/2025\/10\/Wilbur-Doug-150x150.jpg 150w, https:\/\/mprcenter.org\/review\/wp-content\/uploads\/2025\/10\/Wilbur-Doug-300x300.jpg 300w, https:\/\/mprcenter.org\/review\/wp-content\/uploads\/2025\/10\/Wilbur-Doug.jpg 688w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/strong><\/p>\n<p><strong>Douglas S Wilbur, Ph.D.<\/strong> (University of Missouri, School of Journalism, 2019), is a strategic communication scientist who specializes in propaganda, information warfare, strategic communication, psychological operation and others. He is a retired US Army Information Operations Officer with 4 combat deployments. Douglas is currently an independent researcher who works in the IT industry. Douglas_wilbur@yahoo.com<\/p>\n<p><em>\n\t\t<div class=\"clearfix\"><\/div>\n\t\t<hr style=\"margin-top:10px; margin-bottom:10px;\" class=\"divider divider-normal\">\n\t<\/em><\/p>\n\n\t\t\t<\/div>\n\t\t<\/div>\n\t\n\t\t\t<div class=\"clearfix\"><\/div>\n\t\t<\/div>\n\t\n\n\t\t<div class=\"clearfix\"><\/div>\n\t\t<hr style=\"margin-top:10px; margin-bottom:10px;\" class=\"divider divider-normal\">\n\t\n<h2>Introduction<\/h2>\n<p><span class=\"tie-dropcap \">I<\/span>n 2022, Russian propaganda outlet RT claimed Ukraine destroyed deadly pathogens like plague and anthrax at US-funded laboratories to hide a bioweapons program threatening Russia (Reuters, 2022). This false story spread quickly on social media, caused global panic, and justified military action, though fact-checkers proved it was propaganda (PolitiFact, 2022). The story used fear to trigger emotional contagion bias and increase public anxiety (Tandoc et al., 2018). It used vague &#8220;documents&#8221; to exploit source confusion bias and hide credible sources (Ecker et al., 2014). It used anti-Western themes to appeal to distrustful audiences through confirmation bias (Pennycook &amp; Rand, 2019). These propagandists exploit cognitive biases, which are mental shortcuts causing errors in judgment (Roozenbeek &amp; van der Linden, 2019). This manipulation erodes public trust in the media and institutions, weakening the foundations of democratic societies.<\/p>\n<p>Scholarly investigations suggest that propagandists are able to exploit cognitive biases to enhance disinformation\u2019s persuasive impact, though comprehensive models remain underdeveloped. Studies indicate that confirmation bias strengthens belief in false narratives aligned with audience ideologies, increasing their acceptance (Benkler et al., 2018). Emotional contagion bias amplifies fear-based or anger-based content, driving rapid spread across digital platforms (Khaldarova &amp; Pantti, 2016). Source confusion bias, triggered by ambiguous or fabricated sources, leads audiences to misattribute disinformation to credible outlets (Szostek, 2017). Repetition bias reinforces perceived truth through frequent exposure, shaping public perceptions (Pomerantsev &amp; Weiss, 2014). 
These findings highlight cognitive biases as mechanisms of propaganda, but the lack of integrated analyses signals a gap for further exploration.

Current research on how propagandists exploit cognitive biases remains incomplete, with efforts focused on tactics for countering disinformation rather than on a predictive framework. Studies have analyzed biases like confirmation, emotional contagion, or source confusion in isolation, but there is no unified model of how propagandists strategically design linguistic triggers to target these vulnerabilities across formats like news articles or social media posts. Few investigations have explored how repetition and familiarity biases interact with other cognitive mechanisms to amplify propaganda’s impact (Weeks, 2015). Additionally, novel concepts such as attribution decay or cross-source fusion that could explain propaganda’s persistence remain unexamined (Matz et al., 2017). This fragmented understanding hinders the development of robust defenses against propaganda, threatening public trust and democratic stability. The present study fills these gaps through systematic analysis, offering a cohesive framework to advance communication scholarship and address this urgent challenge.

This study is relevant for scholars in multiple disciplines and for strategic communication practitioners aiming to counter propaganda’s manipulation of cognitive biases. It addresses how propaganda messages can exploit mental vulnerabilities by triggering cognitive biases, offering a framework to strengthen media literacy and platform policies (Flynn et al., 2017). The study’s integration of cognitive psychology and rhetorical theory provides a novel lens to explain propaganda’s persuasive mechanisms, enhancing disciplinary knowledge (Boler & Nemorin, 2020). Its findings can bolster strategies to protect public trust and democratic stability (Tucker et al., 2018). The purpose of this mixed-methods study is to explore the exploitation of cognitive biases through linguistic triggers in propaganda texts in digital media contexts. For this research, the exploitation of cognitive biases is defined as the strategic use of mental shortcuts to manipulate audience beliefs and behaviors.

## Review of Literature

Cognitive biases are systematic errors in human thinking that influence how individuals process information and make decisions. These errors stem from the brain’s reliance on mental shortcuts, known as heuristics, which simplify complex judgments but often result in inaccuracies (Tversky & Kahneman, 1974). Heuristics enable quick responses in uncertain environments, yet they introduce predictable distortions in reasoning. Foundational theories explain the underlying mechanisms. The dual-system model posits two modes of cognition: a fast, intuitive system that relies on heuristics for efficient processing and a slow, deliberative system that corrects errors but is less frequently engaged (Kahneman, 2011). This model illustrates why biases persist, as intuitive processing dominates everyday decisions. Similarly, bounded rationality theory argues that cognitive limitations compel individuals to use heuristics, prioritizing practicality over perfect accuracy (Gigerenzer, 2008).
Affective intelligence theory further highlights how emotions integrate with cognition, guiding decisions through affective cues that can exacerbate biased judgments (Marcus et al., 2000). Together, these theories demonstrate how cognitive biases arise in routine reasoning, creating vulnerabilities that propagandists can exploit in mediated communication.

### Prominent Biases

Propagandists strategically exploit specific cognitive biases to enhance the persuasive impact of their messages, making these biases critical variables for empirical analysis (Marwick & Lewis, 2017). Among the more than 100 documented cognitive biases, this study prioritized source confusion, repetition, familiarity, confirmation, and emotional contagion because of their frequent use in digital propaganda, their synergistic interactions that amplify persuasion, and their alignment with rhetorical strategies that shape beliefs and behaviors (Aral & Zhao, 2019; Barsade, 2002; Dechêne et al., 2010). Confirmation bias is critical because it drives audiences to accept ideologically aligned disinformation, reinforcing belief persistence in polarized digital contexts; it may be activated by ideologically charged phrases that align with audience beliefs, such as anti-establishment rhetoric in social media posts (Kunda, 1990). Repetition bias, or the illusory truth effect, is vital for increasing perceived truth through repeated exposure, a core tactic in online propaganda campaigns.

Repetition bias can manifest through repeated claims across news articles or tweets, reinforcing false narratives such as bioweapons conspiracies (Dechêne et al., 2010). Familiarity bias enhances this effect by leveraging recognizable narratives, reducing scrutiny of false content across platforms (Dechêne et al., 2010). Source confusion bias is effective in digital ecosystems, where ambiguous sourcing misleads audiences into trusting false claims; it can arise from vague or fabricated citations, such as “unnamed experts” in news reports, leading audiences to misattribute credibility (Echterhoff et al., 2005). Emotional contagion bias spreads affective states, such as fear or anger, boosting message virality on social media; it may be induced by fear-inducing or anger-provoking language, like alarming headlines, driving rapid sharing on platforms (Barsade, 2002; Aral & Zhao, 2019). These biases were selected because they interact to create powerful cycles of persuasion, making them central to digital propaganda (Marwick & Lewis, 2017).

Beyond these core biases, propaganda texts activate additional cognitive biases through linguistic mechanisms, expanding the scope of this analysis. The availability heuristic can be triggered by vivid, memorable descriptions of events, making rare threats seem common, as in sensational headlines exaggerating risks (Tversky & Kahneman, 1973). Anchoring bias can be engaged by texts that set initial extreme claims, such as inflated statistics, to influence subsequent judgments (Epley & Gilovich, 2006). The bandwagon effect can be induced by phrases implying widespread support, like “everyone knows,” encouraging conformity in social media posts (Cialdini & Goldstein, 2004). The halo effect can influence perceptions through characterizations that generalize positive or negative traits, such as portraying sources as entirely trustworthy based on one attribute (Nisbett & Wilson, 1977).
Loss aversion bias can be exploited through language emphasizing potential losses, such as threats to security, to motivate action (Kahneman & Tversky, 1979). These additional biases are measurable in texts through potential triggers, such as vivid imagery or social-proof language, and broaden the picture of propaganda’s manipulative tactics.

### Cognitive Biases in Persuasion and Misinformation

Research on cognitive biases in persuasion highlights their role in shaping attitude change and message acceptance. Biases lead individuals to prioritize certain message features, such as emotional tone or source authority, over factual accuracy, enhancing persuasion (Chen & Chaiken, 1999). Studies show that biased processing causes audiences to accept persuasive arguments that align with their values, even when evidence is weak (Briñol & Petty, 2009). For example, emotionally charged appeals can override critical evaluation, leading to stronger attitude shifts in advertising or political campaigns (O’Keefe, 2002). Persuasion is amplified when biases reduce scrutiny, allowing messages to influence beliefs without rigorous analysis (Chen & Chaiken, 1999). This literature underscores how cognitive biases facilitate persuasion and create opportunities for manipulation in mediated contexts by altering how audiences interpret and respond to messages.

In misinformation and disinformation, cognitive biases sustain false beliefs and hinder correction efforts. Research indicates that biases cause individuals to retain misleading information, especially when it evokes strong emotions or appears socially endorsed (Vraga & Bode, 2020). False narratives persist because biases prioritize initial impressions, making corrections less effective (Briñol & Petty, 2009). For instance, disinformation spreads rapidly when it leverages emotional or social cues, exploiting biases to bypass critical thinking (Vraga & Bode, 2020). Studies show that digital platforms amplify these effects by exposing audiences to biased content, reinforcing false beliefs (O’Keefe, 2002). This body of work highlights how cognitive biases contribute to the acceptance and spread of misinformation and disinformation, necessitating further study of their strategic use in communication.

### Bias Interaction and Synergy

Cognitive biases interact dynamically in propaganda to amplify persuasion beyond the effect of individual biases. Research shows that biases combine to create synergistic effects, where one bias enhances another’s impact on belief formation (Petty et al., 2007). For example, emotionally charged messages can heighten audience receptivity to ideologically aligned content, strengthening persuasion through interconnected emotional and cognitive processes (Shen & Bigsby, 2013). Similarly, repeated exposure to a message increases its perceived validity, which in turn reduces scrutiny of its source, creating a feedback loop that reinforces false beliefs (Tormala & Petty, 2004). These interactions are particularly potent in digital propaganda, where rapid information sharing amplifies combined bias effects across platforms (Shen & Bigsby, 2013).
Understanding these synergistic dynamics is crucial for analyzing how propaganda texts manipulate audiences, highlighting the need for integrated models of bias activation.

### Limitations of Existing Research

Although the literature on cognitive biases in persuasion, misinformation, and propaganda strategies provides valuable insight into how mental shortcuts influence belief formation and message acceptance, several limitations constrain its scope and application, underscoring the importance of the current study. Research often examines biases in isolation, such as the illusory truth effect or emotional triggers, without integrating them into a comprehensive model that accounts for their synergistic interactions in mediated communication (Pluviano, Watt & Della Sala, 2024). This fragmentation overlooks how biases such as repetition and familiarity reinforce one another to amplify disinformation’s persistence across digital platforms (Lewandowsky, Ecker & Cook, 2017). Furthermore, studies predominantly focus on audience reception, with less attention to how propagandists strategically design linguistic triggers, such as ambiguity, repetition, and emotional valence, to exploit these biases across diverse document types, including news articles and social media posts (Boler & Nemorin, 2020). Finally, the integration of cognitive psychology and rhetorical theory remains rare, leaving a gap in understanding the rhetorical mechanisms behind bias activation (Altay, de Araujo & Mercier, 2022).

## Proposed Theoretical Construct

The present study is a mixed-methods analysis of 260 verified disinformation texts, correlating bias triggers with document types and introducing a new construct, *cross-source fusion*, to bridge cognitive and rhetorical theories. The construct addresses a rarely integrated area and improves scholars’ understanding of how propagandists exploit cognitive biases in mediated communication.

The concept was developed through a synthesis of peer-reviewed literature on source credibility, rhetorical framing, and disinformation tactics, which revealed a recurring pattern, propagandists combining trusted and fabricated sources into persuasive hybrids, that lacked a formal term or model (Entman, 1993; Fisher, 1984). Cross-source fusion refers to the deliberate blending of credible and misleading sources in propaganda texts to enhance a narrative’s plausibility. This process involves integrating references to trusted outlets or established facts with fabricated claims, creating a hybrid message that appears legitimate and resists scrutiny (Entman, 1993). For example, a disinformation text might cite a legitimate mainstream media report on a geopolitical event while fusing it with false assertions about hidden motives, exploiting the credible source to lend authority to the misleading elements.

This construct explains how propagandists amplify persuasion by leveraging cognitive vulnerabilities, such as source confusion bias, in digital ecosystems where information fragments circulate rapidly (Fisher, 1984).
By formalizing cross-source fusion, the study bridges cognitive psychology’s focus on mental shortcuts with rhetorical theory’s emphasis on message design, offering a framework to analyze propaganda’s persistence and inform countermeasures (Jamieson & Cappella, 2008).

## Research Questions

The five research questions were developed through a synthesis of the study’s purpose, the gaps identified in peer-reviewed literature, and the data available from 260 verified disinformation texts. They address how propagandists exploit cognitive biases through linguistic triggers, correlate these triggers with document types, examine synergistic bias interactions, and evaluate the novel construct of cross-source fusion (Entman, 1993; Scheufele & Krause, 2019). These questions directly tackle deficiencies in current scholarship, such as fragmented analyses of biases and the lack of cognitive-rhetorical integration, while leveraging the dataset to provide empirical insights (Brossard, 2013). Framed as exploratory, they guide the mixed-methods analysis to advance communication scholarship and inform countermeasures.

**Research Question One:** *How do disinformation texts integrate cognitive bias triggers through linguistic mechanisms, such as ambiguity, repetition, and emotional valence?*

**Research Question Two:** *How do bias triggers correlate with document types, such as news articles and social media posts?*

**Research Question Three:** *How do disinformation texts coordinate multiple cognitive bias triggers, such as repetition and familiarity, to construct layered persuasive appeals?*

**Research Question Four:** *How does the construct of cross-source fusion enhance understanding of disinformation’s rhetorical mechanisms in mediated communication?*

**Research Question Five:** *How does cross-source fusion’s blending of credible and conspiratorial information relate to the density and intensity of cognitive bias triggers in disinformation narratives?*

## Methods

This study employs a mixed-methods design to explore how disinformation propaganda texts exploit cognitive biases through linguistic triggers. The design follows a sequential explanatory approach, beginning with deductive quantitative content analysis to systematically identify and measure cognitive biases (e.g., confirmation, repetition) and linguistic triggers (e.g., ambiguity, emotional valence) as variables in disinformation texts. This was followed by a qualitative thematic analysis to verify the validity of these findings and uncover nuanced rhetorical patterns missed by coding (Creswell & Plano Clark, 2018). Starting with quantitative coding is justified because it provides a structured framework to test predefined variables derived from the literature, ensuring empirical rigor before qualitative exploration (Neuendorf, 2017). Although conventional wisdom may favor inductive qualitative analysis first, the deductive approach is prioritized here to establish a baseline for bias prevalence; qualitative analysis then refines this baseline by identifying contextual subtleties and rhetorical strategies (Tashakkori & Teddlie, 2010).
This approach enhances understanding of propaganda’s persuasive mechanisms across diverse media formats. In this study, *persuasive power* refers to textual evidence of multiple cognitive bias triggers within a message, measured as rhetorical density and structural complexity, rather than observed changes in audience attitudes or behavior.

### Sample

The study drew its dataset from the EUvsDisinfo database, a project established by the European External Action Service’s East StratCom Task Force. EUvsDisinfo systematically archives, translates, and annotates examples of pro-Kremlin disinformation appearing in multiple languages across traditional and digital media. Each entry in the database links to the original article or broadcast and includes short analyst notes summarizing the false or misleading claims. Because these materials originate from verified state-aligned outlets such as RT, Sputnik, and affiliated regional portals, the archive provides a credible and consistently curated source of propaganda texts suitable for systematic study. Using EUvsDisinfo ensures both authenticity (state-linked messaging) and comparability (uniform collection standards and temporal coverage).

A purposive sampling approach was employed to isolate materials most relevant to the study’s theoretical focus on source confusion. From the full database, articles were selected if they (a) contained identifiable textual narratives rather than video transcripts or social media fragments, (b) addressed geopolitical themes involving the European Union, NATO, Ukraine, or the United States, and (c) exhibited rhetorical framing or attributional structures that could be meaningfully coded. This method aligns with standard qualitative content-analytic logic, prioritizing conceptual richness and analytical depth over population representativeness. The final corpus comprised 260 articles (N = 260) published between 2022 and 2025. This sample size balances breadth and manageability: it is large enough to support descriptive and correlational statistics while allowing detailed manual verification of automated codes. Prior methodological research in computational propaganda analysis indicates that roughly 150–250 documents generally provide sufficient variance to identify dominant rhetorical patterns and stabilize frequency estimates (e.g., Barberá et al., 2021; Wilson & Starbird, 2020). The resulting dataset therefore offers both analytic reliability and interpretive depth, appropriate for a mixed-methods design.

### Data Analysis

This study analyzes two types of data: cognitive biases (source confusion, repetition, familiarity, confirmation, emotional contagion, availability heuristic, anchoring, bandwagon effect, halo effect, loss aversion) and linguistic triggers (ambiguity, repetition, emotional valence, narrative framing, social proof language, hyperbolic language, authority appeals), listed and defined in Table 1 below. The study examines *textual indicators* of cognitive bias activation, that is, linguistic and structural features consistent with bias triggers established in prior psychological research. It does not measure actual bias responses or persuasive effects among audiences.

We use a sequential mixed-methods design to balance quantitative precision and qualitative depth. The first phase is deductive quantitative content analysis, supported by natural language processing (NLP) techniques. First, we measured the frequency of these variables in disinformation texts, coding cognitive biases through textual indicators, such as vague references for source confusion or consensus phrases for the bandwagon effect. Next, we coded linguistic triggers through rhetorical features, such as exaggerated language or authority invocations (Pennebaker et al., 2003). NLP tools, including keyword extraction and sentiment analysis, identified recurring patterns, such as repeated phrases or emotionally charged terms, with two independent coders verifying results to ensure high intercoder reliability (κ ≥ 0.80) (Jurafsky & Martin, 2019). This approach, justified by its ability to process large text corpora efficiently while detecting bias-related markers, risks misinterpreting stylistic nuances (Hovy & Lavid, 2010). The quantitative foundation enables systematic measurement of disinformation’s persuasive mechanisms, setting the stage for subsequent qualitative analysis.
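To make this stage concrete, the sketch below shows one way lexicon-based trigger flagging and repeated-phrase detection might be implemented. The mini-lexicons and function names are hypothetical stand-ins, not the study’s actual coding dictionary; an emotional-valence lexicon would be added in the same fashion.

```python
import re
from collections import Counter

# Hypothetical mini-lexicons for three triggers; the study's full dictionaries are not reproduced here.
TRIGGER_LEXICON = {
    "ambiguity": [r"\breports suggest\b", r"\bunnamed (?:experts|officials|sources)\b"],
    "social_proof": [r"\beveryone knows\b", r"\bmany (?:believe|experts)\b"],
    "hyperbole": [r"\bunprecedented\b", r"\bcatastrophic\b", r"\bimminent\b"],
}

def flag_triggers(text: str) -> dict:
    """Count lexicon matches per linguistic trigger in one document."""
    lowered = text.lower()
    return {
        trigger: sum(len(re.findall(pattern, lowered)) for pattern in patterns)
        for trigger, patterns in TRIGGER_LEXICON.items()
    }

def repeated_phrases(text: str, n: int = 3, min_count: int = 2) -> dict:
    """Surface n-grams repeated within a document as candidate repetition triggers."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(zip(*(tokens[i:] for i in range(n))))
    return {" ".join(gram): c for gram, c in counts.items() if c >= min_count}
```

Counts produced this way would be reviewed by the two human coders rather than taken at face value, which is where the κ ≥ 0.80 agreement check fits in.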
Quantitatively, cognitive biases and linguistic triggers are coded on a 0–5 interval scale (0 = absent, 5 = highly prominent), with scores averaged to produce text-level intensity values for each variable. To ensure conservative interpretation, thresholds are applied post hoc: ≥ 3.5 indicates strong presence, 1.5–3.49 moderate presence, and < 1.5 absence. Descriptive statistics and chi-square tests explore relationships between variable intensity and document types (e.g., news articles, social media posts), addressing the research questions on bias correlations (Bauer & Gaskell, 2008). Reliability testing on a subsample of 30 texts (roughly 12% of the corpus) targets Krippendorff’s α ≥ .80, supported by two independent coders to ensure consistency across coding rounds (Weber & Popova, 2012). Interval-scale coding is justified by its ability to capture nuanced variations in bias and trigger prominence, enhancing statistical analysis compared to binary coding, though it requires rigorous coder training to maintain reliability (Hayes, 2018).
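The threshold logic and the reliability check reduce to a few lines of code. The following is a minimal sketch assuming complete 0–5 ratings from both coders; Krippendorff’s full procedure also accommodates missing entries, which this simplification omits.

```python
import itertools
import numpy as np

def classify_intensity(score: float) -> str:
    """Post hoc thresholds: >= 3.5 strong, 1.5-3.49 moderate, below 1.5 absent."""
    if score >= 3.5:
        return "strong"
    if score >= 1.5:
        return "moderate"
    return "absent"

def krippendorff_alpha_interval(ratings: np.ndarray) -> float:
    """Krippendorff's alpha for interval data with no missing ratings.

    ratings: (n_coders, n_units) array of 0-5 scores, e.g., shape (2, 30)
    for the two coders and the 30-text reliability subsample.
    """
    n_coders, _ = ratings.shape
    # Observed disagreement: mean squared difference between coder pairs within units.
    within = [(ratings[i] - ratings[j]) ** 2
              for i, j in itertools.combinations(range(n_coders), 2)]
    d_observed = float(np.mean(within))
    # Expected disagreement: mean squared difference over all rating pairs, pooled.
    pooled = ratings.ravel()
    d_expected = float(np.mean([(a - b) ** 2
                                for a, b in itertools.combinations(pooled, 2)]))
    return 1.0 - d_observed / d_expected
```

Perfect agreement yields α = 1, while chance-level agreement pushes α toward 0, so the ≥ .80 target marks substantially better-than-chance consistency.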
Following the quantitative content analysis, qualitative thematic analysis is conducted on a subsample of 50 texts (roughly 20% of the corpus) to verify the quantitative findings and identify rhetorical nuances in cognitive biases and linguistic triggers. The thematic analysis follows Braun and Clarke’s (2006) six-phase method: (1) familiarizing with the data through repeated reading, (2) generating initial codes for rhetorical patterns (e.g., narrative structures, persuasive intent), (3) searching for themes across codes, (4) reviewing themes for coherence, (5) defining and naming themes, and (6) producing a report linking themes to biases and triggers (Braun & Clarke, 2006). Two independent coders iteratively develop codes, guided by grounded theory principles to ensure emergent themes reflect textual subtleties (Corbin & Strauss, 2015). This subsample size balances depth and feasibility, allowing detailed exploration of rhetorical nuances while remaining manageable within the study’s scope, as supported by prior mixed-methods research (Guest et al., 2012). The qualitative phase verifies the validity of the quantitative results and captures contextual patterns, such as strategic source blending, that NLP may overlook.

Cross-source fusion was defined as the rhetorical blending of credible institutional references (e.g., government agencies, international organizations, scientific reports) with conspiratorial or ideologically loaded claims within a single text. Coders identified fusion when credible and non-credible material appeared in close proximity or logical sequence, creating an implied relationship of verification. Each instance was rated on a 0–5 scale for integration density: 0 = none, 1–2 = isolated or partial blending, 3–4 = recurring hybrid sourcing, and 5 = continuous interweaving of credible and conspiratorial sources. This construct differs from the cognitive bias variables by focusing on *structural composition* rather than psychological trigger type, measuring how credibility is rhetorically transferred rather than how biases are individually evoked.
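Although fusion was coded manually, the integration-density logic could in principle be approximated for automated screening. The sketch below counts switches between credible and conspiratorial source markers; both marker lists are hypothetical illustrations, not the study’s codebook.

```python
import re

# Hypothetical marker lists for illustration; actual coding was done by trained human coders.
CREDIBLE = [r"\bWorld Health Organization\b", r"\bUnited Nations\b", r"\bReuters\b",
            r"\bofficial report\b", r"\bpeer-reviewed\b"]
CONSPIRATORIAL = [r"\bsecret (?:lab|program|network)\b", r"\bglobal elites?\b",
                  r"\bhidden agenda\b"]

def fusion_density(text: str) -> int:
    """Score 0-5 for how tightly credible and conspiratorial sourcing are interwoven.

    Counts switches between the two marker types in order of appearance;
    more alternation implies denser fusion, capped at the scale maximum.
    """
    hits = []
    for label, patterns in (("credible", CREDIBLE), ("conspiratorial", CONSPIRATORIAL)):
        for pattern in patterns:
            hits.extend((m.start(), label) for m in re.finditer(pattern, text, re.IGNORECASE))
    sequence = [label for _, label in sorted(hits)]
    switches = sum(a != b for a, b in zip(sequence, sequence[1:]))
    return min(switches, 5)
```

A text that cites a credible report, pivots to a conspiratorial claim, and returns to institutional sourcing would score higher than one that keeps the two source types in separate sections, mirroring the 0–5 rubric above.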
*Table 1: Cognitive Biases and Linguistic Triggers in Disinformation Texts*

**Cognitive Biases**

| Variable | Operational Definition | Hypothetical Example in Text | Citation |
|---|---|---|---|
| Source Confusion Bias | Misattribution of information to incorrect or vague sources | Citing “unnamed experts” in a news article to support a false claim | Echterhoff, Higgins, & Groll (2005) |
| Repetition Bias | Increased perceived truth due to repeated exposure (illusory truth effect) | Repeating a false bioweapons claim across social media posts | Dechêne, Stahl, Hansen, & Wänke (2010) |
| Familiarity Bias | Preference for previously encountered information | Referencing familiar historical events to embed disinformation | Dechêne, Stahl, Hansen, & Wänke (2010) |
| Confirmation Bias | Tendency to accept information aligning with pre-existing beliefs | Anti-Western rhetoric in a blog post targeting distrustful audiences | Kunda (1990) |
| Emotional Contagion Bias | Spread of affective states (e.g., fear, anger) through emotionally charged communication | Fear-inducing headlines about geopolitical threats in news articles | Barsade (2002) |
| Availability Heuristic | Overestimation of event likelihood due to vivid descriptions | Sensational headlines exaggerating risks, e.g., “imminent global threat” | Tversky & Kahneman (1973) |
| Anchoring Bias | Influence of initial extreme claims on subsequent judgments | Inflated statistics in a news report, e.g., “millions affected by crisis” | Epley & Gilovich (2006) |
| Bandwagon Effect | Conformity to perceived widespread beliefs | Phrases implying mass support, e.g., “everyone knows” in social media posts | Cialdini & Goldstein (2004) |
| Halo Effect | Generalization of traits based on one attribute | Portraying a source as fully trustworthy based on one credible claim | Nisbett & Wilson (1977) |
| Loss Aversion Bias | Motivation to act due to emphasis on potential losses | Language highlighting threats, e.g., “security at risk” in articles | Kahneman & Tversky (1979) |

**Linguistic Triggers**

| Variable | Operational Definition | Hypothetical Example in Text | Citation |
|---|---|---|---|
| Ambiguity | Use of vague or unclear language to obscure meaning or source | Ambiguous phrases like “reports suggest” without specific attribution | Slatcher, Chung, Pennebaker, & Stone (2007) |
| Repetition | Repeated use of specific phrases or claims within or across texts | Recurring slogans in social media posts to reinforce a narrative | Dechêne, Stahl, Hansen, & Wänke (2010) |
| Emotional Valence | Use of emotionally charged language to evoke affective responses | Alarmist language in a tweet, e.g., “catastrophic threat imminent” | Barsade (2002) |
| Narrative Framing | Structured narratives emphasizing specific perspectives | Victimhood narratives in news articles, e.g., “nation under attack” | Entman (1993) |
| Social Proof Language | Language implying widespread support or consensus | Phrases like “many believe” in social media posts | Cialdini & Goldstein (2004) |
| Hyperbolic Language | Exaggerated language to amplify urgency or impact | Extreme terms like “unprecedented crisis” in headlines | Tannenbaum et al. (2015) |
| Authority Appeals | Invoking authoritative figures or institutions to lend credibility | Citing “scientists confirm” in a news report | Hovland & Weiss (1951) |

## Findings

Using a conservative threshold of ≥ 4 to mark strong bias activation, analysis shows that around 61% of the 260 disinformation texts contained at least one strongly activated bias. Roughly a third displayed multiple strong biases at once, indicating a layering of psychological cues. The most common high-intensity patterns were confirmation bias and affective provocation. Source confusion (38%) and repetition bias (36%) followed closely, appearing mainly in articles that mimicked journalistic form while repeating unverified claims to signal credibility. Less frequent were familiarity bias, the availability heuristic, and anchoring bias, each below 25%, suggesting they serve as supporting rather than dominant mechanisms. Overall, strong bias activation appeared selective and purposeful.
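Given the text-by-bias matrix of 0–5 intensity scores described in the Methods section, these prevalence figures reduce to simple threshold counts. The sketch below illustrates the computation; the array and key names are illustrative.

```python
import numpy as np

def strong_bias_prevalence(scores: np.ndarray, threshold: float = 4.0) -> dict:
    """scores: (n_texts, n_biases) matrix of 0-5 intensity ratings.

    Returns the share of texts with at least one strong bias and the share
    with several at once, mirroring the ~61% and ~one-third figures above.
    """
    strong = scores >= threshold                      # boolean matrix of strong activations
    return {
        "any_strong": float(strong.any(axis=1).mean()),
        "multiple_strong": float((strong.sum(axis=1) >= 2).mean()),
        "per_bias": strong.mean(axis=0),              # e.g., ~.38 for source confusion
    }
```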
## Research Questions

### Research Question One

RQ1 asked: *How do disinformation texts integrate cognitive bias triggers through linguistic mechanisms, such as ambiguity, repetition, and emotional valence?* Quantitative analysis revealed clear relationships between cognitive bias intensity and corresponding linguistic triggers (see Figure 1). A one-way analysis of variance showed that repetition bias differed significantly across texts grouped by linguistic repetition, F(2, 257) = 16.42, p < .001, η² = .11, indicating that repeated phrases and slogans were a key driver of perceived truth. Emotional contagion bias correlated strongly with negative emotional valence, r(258) = .68, p < .001, confirming that anger- and fear-laden language reliably amplified persuasion. Source confusion bias displayed a moderate relationship with ambiguity in attributional phrasing, r(258) = .47, p < .001. Together, these findings suggest that repetition, emotional tone, and vague sourcing operate as the primary linguistic mechanisms through which disinformation activates cognitive biases, enhancing both credibility and emotional engagement. As shown in Figure 1, the strongest correlations occur between repetition bias and linguistic repetition, emotional contagion bias and negative valence, and source confusion bias and ambiguity. The pattern visualized in the heatmap underscores that these pairings dominate the rhetorical landscape of the dataset, illustrating how linguistic structure and cognitive vulnerability converge to sustain persuasive impact in digital propaganda.

*Figure 1: Correlations between Linguistic Triggers and Cognitive Biases*

A representative instance appeared in a 2023 pro-Kremlin article alleging secret Western biolabs in Ukraine. The text claimed, *“Reports suggest that U.S. scientists left behind containers of unknown substances, and many experts now believe these were part of a biological weapons program.”* This sentence combines several linguistic cues that activate cognitive biases. Ambiguous attribution through phrases such as *“reports suggest”* and *“many experts”* introduces uncertainty while implying authority. The phrase *“biological weapons program”* injects emotional threat framing that intensifies fear and moral outrage. Repetition of this core claim across multiple articles further reinforces its perceived plausibility. Together, these features blur evidentiary boundaries and amplify affective engagement, illustrating how linguistic triggers operationalize cognitive biases to heighten persuasive impact in disinformation narratives.
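For readers reproducing this style of analysis on their own coded corpus, the inferential tests above map onto standard SciPy routines. The function below is a minimal sketch with illustrative argument names, not the study’s analysis script.

```python
import numpy as np
from scipy import stats

def rq1_tests(bias: np.ndarray, trigger: np.ndarray, grouped_bias: list) -> None:
    """bias, trigger: per-text 0-5 intensity scores for one bias/trigger pair.
    grouped_bias: bias scores split into low/medium/high trigger-level groups.
    """
    r, p = stats.pearsonr(bias, trigger)       # e.g., r(258) = .68 for contagion/valence
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")
    f, p = stats.f_oneway(*grouped_bias)       # e.g., F(2, 257) = 16.42 for repetition
    print(f"ANOVA F = {f:.2f}, p = {p:.4f}")
```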
### Research Question Two

RQ2 asked: *How do bias triggers correlate with document types, such as news articles and social media posts?*

Analysis revealed meaningful variation in cognitive bias intensity across document formats. A multivariate analysis of variance (MANOVA) comparing news articles, commentaries, and social media reposts showed a significant overall effect of document type on aggregated bias scores, Wilks’ Λ = .83, F(14, 504) = 2.67, p < .001, η² = .09. Follow-up univariate tests indicated that source confusion bias and confirmation bias were highest in news-style pieces, F(2, 257) = 6.21, p = .002, η² = .05, reflecting an effort to mimic credible journalism while promoting ideologically aligned narratives. In contrast, emotional contagion bias and repetition bias were strongest in social media posts, F(2, 257) = 8.73, p < .001, η² = .06, consistent with the affective and viral design of short-form online content. Commentaries and op-ed formats displayed intermediate values, often combining interpretive framing with selective emotional emphasis.

A typical contrast appeared between a long-form article and its derivative social media repost. The original piece stated, *“According to official documents, Western advisers continue to manipulate Ukraine’s decision-making while denying any involvement.”* The repost condensed this to, *“The West controls Ukraine—everyone sees it now.”* The first example demonstrates source confusion through the vague invocation of “official documents,” lending journalistic credibility to an unverified claim. The second intensifies emotional contagion and repetition, transforming cautious attribution into an accusatory slogan optimized for sharing. Together, the pair illustrates how message format alters the dominant bias mechanism: credibility through ambiguity in news contexts, and emotional resonance through simplification on social media platforms.

### Research Question Three

RQ3 asked: *How do disinformation texts coordinate multiple cognitive bias triggers, such as repetition and familiarity, to construct layered persuasive appeals?* A correlation and component analysis revealed that several cognitive biases cluster together, creating reinforcing patterns that magnify persuasive impact. A principal component analysis (PCA) identified two dominant dimensions explaining 64 percent of the total variance. The first, labeled Affective-Reinforcement Bias, combined emotional contagion, repetition, and familiarity (loadings = .82, .71, .76), indicating that emotionally charged repetition is highly correlated with message familiarity and belief persistence. The second, Credibility-Simulation Bias, linked source confusion, confirmation, and anchoring (loadings = .79, .74, .69), reflecting attempts to mimic credible reporting through vague attribution and confident assertion. Regression results predicting overall co-occurrence of bias clusters were significant, F(2, 257) = 21.53, p < .001, R² = .29, confirming that texts containing multiple high-intensity biases were more persuasive than those with isolated bias activations.
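A component analysis of this kind can be reproduced with scikit-learn. The sketch below follows the analysis above (standardization, two components) with illustrative names; the component weights play the role of the reported loadings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def bias_components(scores: np.ndarray, bias_names: list) -> None:
    """Extract the two dominant bias clusters from a (n_texts, n_biases) matrix.

    Standardizing first keeps high-variance biases from dominating the components.
    """
    z = StandardScaler().fit_transform(scores)
    pca = PCA(n_components=2).fit(z)
    print("variance explained:", pca.explained_variance_ratio_.sum())  # ~.64 in the study
    labels = ["affective-reinforcement", "credibility-simulation"]
    for component, label in zip(pca.components_, labels):
        top = sorted(zip(bias_names, component), key=lambda pair: -abs(pair[1]))[:3]
        print(label, [(name, round(weight, 2)) for name, weight in top])
```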
Figure 2 visualizes these inter-bias relationships. Red nodes represent the affective-reinforcement cluster and blue nodes depict the credibility-simulation cluster. Numbers within each node show factor loadings, indicating how strongly each bias contributes to its cluster. Lines connecting the nodes show correlation coefficients (r); thicker, darker lines denote stronger associations. Reading the figure from left to right, the dense red triad illustrates how repetition, emotional contagion, and familiarity might reinforce one another, while the blue cluster highlights how confirmation and source confusion jointly simulate credibility.

*Figure 2: Relationships Among Cognitive Biases*

A clear example of bias synergy appeared in a widely circulated 2023 commentary about European sanctions on Russia. The article declared, *“European leaders admit that sanctions are destroying their own economies, yet they continue under pressure from Washington.”* This brief passage combines triggers for confirmation bias, repetition, and emotional contagion in a single frame. The claim appeals to existing beliefs about Western hypocrisy (confirmation), reiterates an unverified economic-collapse narrative (repetition), and provokes anger through emotionally charged phrasing, “destroying their own economies” (emotional contagion). By merging these elements, the text converts ideological alignment into emotional momentum; readers are likely to feel rather than evaluate the claim. This interaction exemplifies the pattern visualized in Figure 2, where emotionally reinforcing and credibility-simulating biases co-occur to strengthen the shareability and spreadability of disinformation.

### Research Question Four

RQ4 asked: *How does the construct of cross-source fusion enhance understanding of disinformation’s rhetorical mechanisms in mediated communication?* Regression analysis demonstrated that the four structural–rhetorical variables significantly predicted persuasive intensity, F(4, 255) = 19.84, p < .001, R² = .24. Among these predictors, cross-source fusion showed the strongest association with overall bias density (β = .63, p < .001), indicating that mixing credible and conspiratorial sources substantially increases cognitive bias activation. Narrative complexity also contributed meaningfully (β = .48, p < .001), suggesting that multi-actor story structures enhance ideological alignment. Both moral appeal (β = .42, p < .001) and threat level (β = .39, p < .001) emerged as significant emotional amplifiers.
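The standardized betas reported here correspond to an ordinary least squares model on z-scored variables. The sketch below shows the equivalent statsmodels computation; the column names are hypothetical placeholders for the coded variables.

```python
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["cross_source_fusion", "narrative_complexity", "moral_appeal", "threat_level"]

def standardized_betas(df: pd.DataFrame) -> pd.Series:
    """OLS of persuasive intensity on the four structural-rhetorical predictors.

    df holds one row per text with the (hypothetical) predictor columns plus
    'persuasive_intensity'; z-scoring every variable makes the coefficients
    comparable standardized betas, e.g., ~.63 for cross-source fusion.
    """
    z = (df - df.mean()) / df.std()                  # column-wise z-scores
    X = sm.add_constant(z[PREDICTORS])
    model = sm.OLS(z["persuasive_intensity"], X).fit()
    return model.params.drop("const")
```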
Texts with the highest concentration of psychological cues were not randomly distributed but clustered around a small set of recurring narratives. These narratives typically framed international politics as a moral struggle between corruption and virtue, often targeting Western governments, NATO, or global elites. Emotional tone was consistently negative, marked by fear, moral outrage, and betrayal. Many presented Russia as the last bastion of cultural or moral integrity under siege by external forces. This concentration of cues suggests that bias activation was most intense in stories that personalized geopolitical conflict as existential threat or ethical collapse: narrative forms that naturally invite repetition, emotional contagion, and confirmation bias.

Figure 3 visualizes these results, with bars representing mean intensity (0–5 scale) and the red line showing standardized regression coefficients (β). The pattern highlights that while all four rhetorical features contribute to persuasion, moral and threat-based framing exert the strongest combined influence, reinforcing the role of ethical justification and existential fear as core mechanisms in disinformation’s persuasive design.

*Figure 3: Structural–Rhetorical Predictors of Persuasive Intensity (N = 260)*

A representative example of this dynamic appeared in a 2023 commentary titled *“The West Has Declared War on Morality.”* The article warned, *“If traditional values fall, civilization itself will collapse—Russia alone stands to protect humanity from this decay.”* The passage fuses moral appeal and existential threat framing to construct a sense of sacred duty and imminent danger. The appeal to “protect humanity” provides moral legitimacy, while the prediction of civilizational collapse transforms ideological disagreement into an urgent survival narrative. This rhetorical pairing converts abstract cultural conflict into a moral crisis, compelling emotional alignment rather than factual evaluation. It exemplifies the pattern shown in Figure 3, where high moral appeal and threat intensity co-occur with elevated cognitive bias density, producing the strongest overall persuasive effect in the dataset.

### Research Question Five

Finally, RQ5 asked: *How does cross-source fusion’s blending of credible and conspiratorial information relate to the density and intensity of cognitive bias triggers in disinformation narratives?* Quantitative analysis showed that cross-source fusion was the single strongest predictor of overall persuasive intensity, β = .63, p < .001, explaining a substantial portion of variance in bias activation even when controlling for other structural–rhetorical variables, F(1, 258) = 39.22, p < .001, R² = .16. Texts that exhibited frequent alternation between factual and conspiratorial sources demonstrated higher mean scores across source confusion, confirmation, and anchoring biases (all r > .55, p < .001).
These results confirm that hybrid sourcing is not an incidental rhetorical choice but a deliberate persuasive strategy designed to transfer credibility from legitimate references to false or ideologically charged claims. As visualized in Figure 3, cross-source fusion stands at the intersection of structural and cognitive mechanisms, functioning as the communicative bridge that activates belief while masking manipulation.

A striking example appeared in a 2023 article titled *“NATO Documents Reveal Secret Biolab Network.”* The text opens with an authentic citation of a World Health Organization report, followed immediately by an unverified claim that *“U.S. military contractors operated these labs in violation of international law.”* This juxtaposition of verifiable and fabricated material creates an evidentiary illusion: the credible source legitimizes the conspiratorial extension. The passage embodies cross-source fusion’s dual effect of simulating transparency while fostering deception. By merging institutional authority with ideological narrative, it produces a persuasive coherence that is cognitively difficult to disentangle, illustrating how hybrid sourcing strengthens disinformation’s perceived authenticity and resilience against correction.

## Discussion

The findings of this study indicate that disinformation achieves persuasive power through a coordinated system of linguistic, emotional, and structural mechanisms. Across the dataset, cognitive biases such as confirmation, emotional contagion, repetition, and source confusion were consistently activated through patterned rhetorical triggers rather than incidental wording. Linguistic repetition reinforced familiarity and perceived truth; emotionally charged phrasing amplified affective resonance; and vague attribution blurred evidentiary boundaries. Structural devices, particularly moral appeal and threat framing, further magnified these effects, converting ideological claims into moral imperatives and crises of survival. Together, these mechanisms suggest a coherent architecture of persuasion in which emotion, credibility cues, and moral urgency are synchronized to heighten message impact.

While the analysis cannot empirically confirm that Russian propagandists consciously engineered each message to exploit specific cognitive biases, the consistency and sophistication of these rhetorical patterns make intentional design highly probable. The recurrence of bias-triggering structures across hundreds of independently attributed texts implies shared production norms or training in persuasive technique rather than coincidence. Such uniformity suggests the presence of an underlying communicative doctrine, one that treats psychological susceptibility not as an aftereffect of propaganda but as a design objective. In this sense, the study does not claim to prove intentional manipulation, but it demonstrates a level of rhetorical regularity that strongly supports the plausibility of coordinated bias activation as a core feature of Russian disinformation practice.

### Theoretical Implications

The results of this study extend current models of propaganda and disinformation by clarifying how cognitive, emotional, and structural elements are integrated within a single persuasive system.
Traditional approaches often treat misinformation as either a psychological phenomenon, in which audiences fall prey to biases, or a discursive one, emphasizing narrative and framing. The present analysis shows that these dimensions are inseparable. Rhetorical form supplies the structure through which cognitive biases can be activated, while biases, in turn, shape how audiences process and reproduce rhetoric. This reciprocal relationship suggests that propaganda functions less as message transmission than as cognitive engineering, aligning linguistic and emotional cues with predictable mental shortcuts to maximize acceptance and minimize scrutiny.

Within this system, cross-source fusion emerges as the theoretical linchpin connecting cognitive bias theory to rhetorical practice. By interweaving credible institutional references (government agencies, international organizations, scientific reports) with conspiratorial or ideologically charged claims, propagandists construct a hybrid evidentiary field. This fusion transfers perceived legitimacy from authentic sources to fabricated narratives, effectively collapsing the distinction between verification and assertion. Psychologically, it engages source confusion, anchoring, and confirmation biases simultaneously: audiences recognize the credible cue, anchor on its authority, and then assimilate the adjoining falsehood as consistent information. Rhetorically, it simulates transparency and balance, adopting the surface conventions of journalism while subverting its epistemic norms.

Theoretically, cross-source fusion represents more than a tactic; it is a structural logic of persuasion suited to the information environment of networked media. It exploits the abundance of credible data online to hide disinformation within plausible informational ecosystems. This mechanism explains how propaganda can appear both familiar and authoritative without overt deception, and why traditional fact-checking often fails to dislodge belief once credibility has been transferred. Future research should develop cross-source fusion as a formal construct, quantifying the density, sequence, and relational structure of blended citations, to determine how it mediates between cognitive susceptibility and rhetorical design. In doing so, communication theory can move beyond models that isolate bias or narrative and toward a unified framework of strategic cognitive rhetoric, in which credibility itself becomes a manipulable resource within modern propaganda.

### Practical Implications

These findings have practical value for analysts, communicators, and policymakers working to identify and counter disinformation. The evidence shows that persuasive power does not depend on new lies but on the strategic reuse of familiar forms: credible sources, emotional language, and moral framing. Countermeasures must therefore focus less on individual claims and more on recognizing the structural patterns that repeatedly activate bias. First, monitoring systems should flag texts that mix verified institutional citations with speculative or conspiratorial conclusions; such blending often signals cross-source fusion, the strongest predictor of persuasive intensity in this study. Second, analysts should track emotional tone and repetition, as these are reliable indicators of content designed to bypass reasoning.
### Practical Implications

These findings have practical value for analysts, communicators, and policymakers working to identify and counter disinformation. The evidence shows that persuasive power does not depend on new lies but on the strategic reuse of familiar forms: credible sources, emotional language, and moral framing. Countermeasures must therefore focus less on individual claims and more on recognizing the structural patterns that repeatedly activate bias. First, monitoring systems should flag texts that mix verified institutional citations with speculative or conspiratorial conclusions; such blending often signals cross-source fusion, the strongest predictor of persuasive impact in this study. Second, analysts should track emotional tone and repetition, as these are reliable indicators of content designed to bypass reasoning (a minimal screening sketch follows below). Third, communication training for diplomats, journalists, and military personnel should include instruction in bias recognition, teaching how ambiguity and moral urgency can manipulate judgment even when facts appear accurate. Finally, public resilience efforts should emphasize cognitive awareness rather than simple refutation. Helping audiences understand *how* disinformation feels convincing, and why repetition, fear, and authority cues work, can reduce susceptibility across platforms. The goal is not only to expose falsehoods but to interrupt the psychological mechanisms that make them persuasive.
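As a rough illustration of the second recommendation, the sketch below computes the two indicators named above, emotional tone and repetition, for a single text. The `EMOTION_WORDS` lexicon, the trigram-based repetition proxy, and the cutoff values are placeholders invented for this example; a production monitor would substitute validated dictionaries and calibrated thresholds.

```python
# Hedged sketch of structural screening indicators for disinformation triage.
# Word list, trigram proxy, and thresholds are illustrative assumptions.
from collections import Counter
import re

# Hypothetical emotion lexicon (placeholder for a validated dictionary).
EMOTION_WORDS = {"threat", "attack", "betrayal", "outrage", "deadly",
                 "crisis", "invasion", "panic", "traitor", "evil"}

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def emotional_tone(text: str) -> float:
    """Share of tokens drawn from the emotion lexicon."""
    toks = tokens(text)
    return sum(t in EMOTION_WORDS for t in toks) / len(toks) if toks else 0.0

def repetition_score(text: str, n: int = 3) -> float:
    """Share of word trigrams occurring more than once: a crude proxy for
    the repeated phrasing associated with the illusory-truth effect."""
    toks = tokens(text)
    ngrams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

def screen(text: str, tone_cut: float = 0.05, rep_cut: float = 0.10) -> dict:
    """Return indicator values and raised flags; cutoffs are arbitrary."""
    tone, rep = emotional_tone(text), repetition_score(text)
    return {"emotional_tone": round(tone, 3), "repetition": round(rep, 3),
            "flags": [name for name, hit in
                      [("high_emotion", tone >= tone_cut),
                       ("high_repetition", rep >= rep_cut)] if hit]}

doc = ("The deadly threat is real. The deadly threat is growing. "
       "Officials hide the deadly threat from you.")
print(screen(doc))
```

In practice these flags would sit alongside a fusion score like the one sketched earlier, so that texts combining emotional saturation, repetition, and hybrid sourcing rise to the top of an analyst's queue.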
## Conclusion

This study shows that disinformation is constructed through a coordinated system of cognitive, emotional, and structural mechanisms that together create persuasive coherence. Across hundreds of texts, linguistic repetition, emotional tone, moral appeal, and ambiguous sourcing aligned in consistent patterns that are likely to amplify credibility and belief. These recurring forms suggest design rather than coincidence. While the evidence cannot prove that Russian propagandists intentionally crafted content to trigger specific cognitive biases, the regularity and sophistication of these structures make intentional coordination highly probable. Even if bias activation arises indirectly, through habit, imitation, or institutional style, the result is the same: audiences experience information that feels credible, urgent, and self-confirming. This consistency underscores the value of identifying how persuasion works, not only who intends it. By linking rhetorical form to predictable psychological effects, the study offers a method for analyzing disinformation as a process rather than a set of claims.

Theoretically, the concept of cross-source fusion explains how legitimacy can be simulated without overt deception. Practically, it provides a lens for early detection and counterstrategy, showing that bias activation can be recognized in structure before it spreads through content. Whether intentional or emergent, these mechanisms reveal how modern propaganda sustains its influence in open information environments. Understanding these dynamics moves the study of disinformation beyond exposure toward the systematic dismantling of its persuasive architecture.

## References

Altay, S., de Araujo, E., & Mercier, H. (2022). The disaster of misinformation: A review of research in social media. *Current Opinion in Psychology, 45*, Article 101317. https://doi.org/10.1016/j.copsyc.2022.101317

Aral, S., & Zhao, M. (2019). Social media sharing and the spread of misinformation. *Management Science, 65*(12), 5573–5587. https://doi.org/10.1287/mnsc.2019.3321

Barberá, P., Boydstun, A. E., Linn, S., McMahon, R., & Nagler, J. (2021). Automated text classification of news articles: A practical guide. *Political Analysis, 29*(1), 19–42. https://doi.org/10.1017/pan.2020.8

Barsade, S. G. (2002). The ripple effect: Emotional contagion and its influence on group behavior. *Administrative Science Quarterly, 47*(4), 644–675. https://doi.org/10.2307/3094912

Benkler, Y., Faris, R., Roberts, H., & Zuckerman, E. (2018). *Network propaganda: Manipulation, disinformation, and radicalization in American politics*. Oxford University Press. https://doi.org/10.1093/oso/9780190923624.001.0001

Boler, M., & Nemorin, S. (2020). *Propaganda and persuasion in the digital age: A critical introduction*. Routledge. https://doi.org/10.4324/9781003052272

Brashier, N. M., & Marsh, E. J. (2020). Judging truth. *Annual Review of Psychology, 71*, 499–515. https://doi.org/10.1146/annurev-psych-010419-050807

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. *Qualitative Research in Psychology, 3*(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Briñol, P., & Petty, R. E. (2009). Persuasion: Insights from the self-validation hypothesis. *Advances in Experimental Social Psychology, 41*, 69–118. https://doi.org/10.1016/S0065-2601(08)00402-4

Chen, S., & Chaiken, S. (1999). The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), *Dual-process theories in social psychology* (pp. 73–96). Guilford Press.

Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. *Annual Review of Psychology, 55*, 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015

Corbin, J., & Strauss, A. (2015). *Basics of qualitative research: Techniques and procedures for developing grounded theory* (4th ed.). Sage.

Creswell, J. W., & Plano Clark, V. L. (2018). *Designing and conducting mixed methods research* (3rd ed.). Sage.

Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect. *Personality and Social Psychology Review, 14*(2), 238–257. https://doi.org/10.1177/1088868309352251

Echterhoff, G., Higgins, E. T., & Groll, S. (2005). Audience-tuning effects on memory: The role of shared reality. *Journal of Personality and Social Psychology, 89*(3), 257–276. https://doi.org/10.1037/0022-3514.89.3.257

Ecker, U. K. H., Lewandowsky, S., & Tang, D. T. W. (2014). Explicit warnings reduce but do not eliminate the continued influence of misinformation. *Memory & Cognition, 38*(8), 1087–1100. https://doi.org/10.3758/MC.38.8.1087

Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. *Journal of Communication, 43*(4), 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x

Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. *Psychological Science, 17*(4), 311–318. https://doi.org/10.1111/j.1467-9280.2006.01704.x

Fisher, W. R. (1984). Narration as a human communication paradigm: The case of public moral argument. *Communication Monographs, 51*(1), 1–22. https://doi.org/10.1080/03637758409390180

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. *Political Psychology, 38*(S1), 127–150. https://doi.org/10.1111/pops.12394
Gigerenzer, G. (2008). Why heuristics work. *Perspectives on Psychological Science, 3*(1), 20–29. https://doi.org/10.1111/j.1745-6916.2008.00058.x

Guest, G., MacQueen, K. M., & Namey, E. E. (2012). *Applied thematic analysis*. Sage.

Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). *Communication and persuasion: Psychological studies of opinion change*. Yale University Press.

Hovy, E., & Lavid, J. (2010). Towards a 'science' of text mining: Concepts and applications. *Procesamiento del Lenguaje Natural, 45*, 11–22.

Jamieson, K. H., & Cappella, J. N. (2008). *Echo chamber: Rush Limbaugh and the conservative media establishment*. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195366822.001.0001

Jurafsky, D., & Martin, J. H. (2019). *Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition* (3rd ed. draft). Stanford University.

Kahneman, D. (2011). *Thinking, fast and slow*. Farrar, Straus and Giroux.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. *Econometrica, 47*(2), 263–292. https://doi.org/10.2307/1914185

Khaldarova, I., & Pantti, M. (2016). Fake news: The narrative battle over the Ukrainian conflict. *Journalism Practice, 10*(7), 891–901. https://doi.org/10.1080/17512786.2016.1163237

Kunda, Z. (1990). The case for motivated reasoning. *Psychological Bulletin, 108*(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480

Lewandowsky, S., Cook, J., Ecker, U. K. H., & Albarracin, D. (2022). Misinformation: Susceptibility, spread, and interventions to immunize individuals and society. *Nature Medicine, 28*(3), 460–467. https://doi.org/10.1038/s41591-022-01713-6

Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the post-truth era. *Journal of Applied Research in Memory and Cognition, 6*(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008

Marcus, G. E., Neuman, W. R., & MacKuen, M. (2000). *Affective intelligence and political judgment*. University of Chicago Press.

Marwick, A., & Lewis, R. (2017). *Media manipulation and disinformation online*. Data & Society Research Institute. https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017). Psychological targeting as an effective approach to digital mass persuasion. *Proceedings of the National Academy of Sciences, 114*(48), 12714–12719. https://doi.org/10.1073/pnas.1710966114

Neuendorf, K. A. (2017). *The content analysis guidebook* (2nd ed.). Sage.
Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. *Journal of Personality and Social Psychology, 35*(4), 250–256. https://doi.org/10.1037/0022-3514.35.4.250

O'Keefe, D. J. (2002). *Persuasion: Theory and research* (2nd ed.). Sage.

Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. G. (2003). Psychological aspects of natural language use: Our words, our selves. *Annual Review of Psychology, 54*, 547–577. https://doi.org/10.1146/annurev.psych.54.101601.145041

Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. *Cognition, 188*, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011

Petty, R. E., Briñol, P., & Tormala, Z. L. (2002). Thought confidence as a determinant of persuasion: The self-validation hypothesis. *Journal of Personality and Social Psychology, 82*(5), 722–741. https://doi.org/10.1037/0022-3514.82.5.722

Petty, R. E., & Wegener, D. T. (1998). Attitude change: Multiple roles for persuasion variables. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), *The handbook of social psychology* (4th ed., pp. 323–390). McGraw-Hill.

Pluviano, S., Watt, C., & Della Sala, S. (2024). Don't believe them! Reducing misinformation influence through credibility labeling. *Psychological Science, 35*(8), 1324–1337. https://doi.org/10.1177/0956797624124032

PolitiFact. (2022, March 25). No evidence that US-funded labs in Ukraine are biological weapons facilities, despite Russian claims. https://www.politifact.com/factchecks/2022/mar/25/tucker-carlson/no-evidence-us-funded-labs-ukraine-are-biological-w/

Pomerantsev, P., & Weiss, M. (2014). *The menace of unreality: How the Kremlin weaponizes information, culture and money*. Institute of Modern Russia. https://imrussia.org/media/pdf/Research/Michael_Weiss_and_Peter_Pomerantsev__The_Menace_of_Unreality.pdf

Reuters. (2022, March 6). Russia says Ukraine hiding U.S.-funded bioweapons program. https://www.reuters.com/world/europe/russia-says-ukraine-hiding-us-funded-bioweapons-programme-2022-03-06/
Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. *Palgrave Communications, 5*(1), 1–10. https://doi.org/10.1057/s41599-019-0279-9

Scheufele, D. A., & Krause, N. M. (2019). Science audiences, misinformation, and fake news. *Proceedings of the National Academy of Sciences, 116*(16), 7662–7669. https://doi.org/10.1073/pnas.1805871115

Shen, L., & Bigsby, E. (2013). The effects of message features: Processing fluency and involvement. In J. P. Dillard & L. Shen (Eds.), *The SAGE handbook of persuasion: Developments in theory and practice* (2nd ed., pp. 73–88). Sage.

Szostek, J. (2017). The power and limits of Russia's strategic narrative in Ukraine: The role of linkage. *Perspectives on Politics, 15*(2), 379–395. https://doi.org/10.1017/S153759271700007

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining "fake news": A typology of scholarly definitions. *Digital Journalism, 6*(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

Tashakkori, A., & Teddlie, C. (2010). *Sage handbook of mixed methods in social & behavioral research* (2nd ed.). Sage.

Teddlie, C., & Tashakkori, A. (2009). *Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences*. Sage.

Thorson, E. (2016). Belief echoes: The persistent effects of corrected misinformation. *Political Communication, 33*(3), 460–480. https://doi.org/10.1080/10584609.2015.1102187

Tormala, Z. L., & Petty, R. E. (2004). Source credibility and attitude certainty: A metacognitive analysis of resistance to persuasion. *Journal of Consumer Psychology, 14*(4), 427–442. https://doi.org/10.1207/s15327663jcp1404_11

Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). *Social media, political polarization, and political disinformation: A review of the scientific literature*. Hewlett Foundation. https://www.hewlett.org/wp-content/uploads/2018/03/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. *Science, 185*(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

Vraga, E. K., & Bode, L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. *Political Communication, 37*(1), 136–144. https://doi.org/10.1080/10584609.2019.1668896

Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. *Journal of Communication, 65*(4), 699–719. https://doi.org/10.1111/jcom.12164
Weber, R. P. (1990). *Basic content analysis* (2nd ed.). Sage.

Wilson, T., & Starbird, K. (2020). Cross-platform disinformation campaigns: Lessons learned and next steps. *Harvard Kennedy School Misinformation Review, 1*(1), 1–9. https://doi.org/10.37016/mr-2020-003