Misinformation, Fake News
& the Psychology of Belief
A rigorous academic guide for students and researchers exploring how false information is created, spread, and believed — and what cognitive science, communication theory, and critical media studies reveal about the modern epistemic crisis.
Defining the Landscape: Mis-, Dis- & Mal-information
The contemporary information environment demands precise terminology. Scholars of communication, political science, and cognitive psychology draw sharp distinctions between closely related concepts. Claire Wardle and Hossein Derakhshan (2017), in their landmark report for the Council of Europe, proposed a taxonomy that remains the most widely cited in academic literature.
The critical variable in any information disorder is intent. The same false claim can constitute misinformation (accidental) or disinformation (deliberate). This distinction has profound legal, ethical, and policy implications.
Why “Fake News” is a Contested Term
The phrase fake news entered mainstream discourse around the 2016 US presidential election, yet scholars are wary of using it unreflectively. The term has been politically weaponised — used by politicians to delegitimise accurate journalism they find unflattering. For this reason, many academics prefer the more precise vocabulary of “information disorder” (Wardle) or “epistemic pollution” (Floridi, 2016).
Wardle, C. & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe Report DGI(2017)09. Essential foundation for any study of misinformation.
A Seven-Type Typology
| Type | Example | False? | Harmful? | Intentional? |
|---|---|---|---|---|
| Satire / Parody | The Onion mistaken as real | ✓ | Sometimes | ✗ |
| Misleading Content | Deceptive framing of facts | Partial | ✓ | Sometimes |
| Imposter Content | Fake BBC homepage | ✓ | ✓ | ✓ |
| Fabricated Content | Invented quotes from leaders | ✓ | ✓ | ✓ |
| False Connection | Headline doesn’t match story | Partial | ✓ | Sometimes |
| False Context | Real image, wrong caption | Partial | ✓ | Sometimes |
| Manipulated Content | Deepfakes, doctored images | ✓ | ✓ | ✓ |
The Misinformation Ecosystem
Misinformation does not exist in a vacuum. It flows through a complex ecosystem involving producers, amplifiers, platforms, and audiences. Understanding the systemic architecture of this ecosystem is essential to designing effective interventions.
The Attention Economy Problem
Shoshana Zuboff’s concept of surveillance capitalism (2019) provides crucial context: digital platforms profit from maximising engagement, not accuracy. Emotionally arousing content — including false and outrage-inducing information — generates more clicks, shares, and time-on-platform than dry, factual reporting. This creates a structural incentive misalignment at the heart of the information ecosystem.
Vosoughi, Roy, and Aral’s landmark 2018 study in Science found that false news spreads significantly faster, farther, and more broadly than true news on Twitter, and that human behaviour — not bots — was the primary driver of this asymmetry.
False stories are 70% more likely to be retweeted than true stories. Novelty — the fact that false information tends to be more surprising — drives this pattern, exploiting human curiosity and emotional response systems. (Vosoughi, Roy & Aral, 2018, Science)
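The compounding consequence of even a modest per-share advantage can be illustrated with a toy branching model. This is a minimal sketch: the probabilities, follower counts, and number of generations are illustrative assumptions, not Vosoughi et al.'s estimates.

```python
def expected_reach(share_prob, followers=10, generations=4):
    """Expected total exposures in a simple branching model where each
    exposed user reshares with probability `share_prob` to `followers`
    followers. All parameter values here are illustrative."""
    r = share_prob * followers  # expected reshares generated per exposure
    reach, frontier = 1.0, 1.0
    for _ in range(generations):
        frontier *= r           # expected size of the next sharing wave
        reach += frontier
    return reach

# A false story reshared 70% more readily per exposure than a true one:
true_reach = expected_reach(share_prob=0.10)   # r = 1.0 -> reach 5.0
false_reach = expected_reach(share_prob=0.17)  # r = 1.7 -> reach ~18.9
```

Because the advantage multiplies at every generation, a 70% edge per hop yields nearly a fourfold difference in expected reach after only four sharing waves.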
The Psychology of Belief Formation
Why do intelligent, educated people believe false information? The answer lies not in stupidity, but in the deep architecture of human cognition. Belief is not a passive reception of evidence; it is an active, social, emotional, and identity-laden process.
Dual-Process Theory (System 1 & System 2)
Daniel Kahneman, awarded the 2002 Nobel Memorial Prize in Economic Sciences, popularised the dual-process framework in Thinking, Fast and Slow (2011). It describes two modes of cognition: System 1 is fast, automatic, and intuitive; System 2 is slow, effortful, and analytical. Misinformation typically exploits System 1 defaults.
Research by Gordon Pennycook and David Rand (2019) suggests that belief in fake news is primarily a failure of System 2 engagement — not a lack of intelligence, but a failure to pause and apply analytical reasoning. Critically, the solution involves promoting “cognitive reflection”: the disposition to slow down and think deliberately before accepting claims.
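The flavour of the cognitive reflection Pennycook and Rand measure is captured by the classic bat-and-ball item from Frederick's (2005) Cognitive Reflection Test, which their studies employ. System 1 blurts out ten cents; a moment of System 2 algebra gives the correct answer:

```python
# Bat-and-ball item: a bat and a ball cost $1.10 together, and the bat
# costs $1.00 more than the ball. How much does the ball cost?
# Intuitive (System 1) answer: $0.10. Solving the constraints (System 2):
#   ball + bat = 1.10  and  bat = ball + 1.00  =>  2*ball + 1.00 = 1.10
ball = (1.10 - 1.00) / 2   # 0.05: the ball costs five cents
bat = ball + 1.00          # 1.05
```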
Social Identity & Motivated Reasoning
Ziva Kunda’s theory of motivated reasoning (1990) argues that people are “motivated to arrive at desired conclusions” and will unconsciously employ cognitive strategies to reach them. When misinformation aligns with group identity — political, religious, ethnic — the drive to accept it is powerfully amplified by social belonging and in-group loyalty.
“Our tribal psychology makes us enormously resistant to information that threatens our group identity, even when it comes from authoritative sources.”
— Dan Kahan, Cultural Cognition Project, Yale Law School
The Illusory Truth Effect
One of the most alarming findings in misinformation research is the illusory truth effect (Hasher, Goldstein & Toppino, 1977; Pennycook et al., 2018): repeated exposure to a statement increases its perceived truth, regardless of actual accuracy. This is because repetition increases processing fluency — the ease with which the brain processes information — which is misattributed to truth.
Simply repeating a myth to debunk it can paradoxically strengthen it in readers’ minds. Effective fact-checking must therefore use strategic communication — leading with truth, not with the falsehood being corrected. This principle is central to the “truth sandwich” approach (Lakoff, 2017).
Cognitive Biases That Drive False Belief
Human cognition is riddled with systematic shortcuts — heuristics — that were adaptive in evolutionary environments but create vulnerabilities in the modern information landscape. Below is a curated map of the biases most directly implicated in belief in misinformation.
Confirmation bias: The tendency to search for and favour information that confirms existing beliefs. Identified by Peter Wason (1960), it is the single most studied cognitive bias in misinformation research.
Illusory truth effect: Repeated exposure increases perceived truth regardless of accuracy. Particularly dangerous in social media environments where false claims circulate continuously (Hasher et al., 1977; Pennycook et al., 2018).
Motivated reasoning: People deploy reasoning capacities not to find truth but to justify desired conclusions (Kunda, 1990). Intelligence can paradoxically intensify this bias: more capable people construct more sophisticated rationalisations.
Availability heuristic: We judge the likelihood of events by how easily examples come to mind. Vivid, emotional misinformation becomes cognitively “available” and feels more representative than it is (Tversky & Kahneman, 1973).
Backfire effect: In some conditions, corrections can cause beliefs to become more extreme. More recent research questions its universality (Wood & Porter, 2019), but it appears most likely around identity-relevant beliefs.
Dunning–Kruger effect: People with limited knowledge in a domain systematically overestimate their competence, making them resistant to expert correction and susceptible to confident-sounding but false information (Kruger & Dunning, 1999).
In-group bias: We apply different standards to claims from our group vs. others. Social identity theory (Tajfel & Turner, 1979) predicts we will uncritically accept claims supporting our group and reject those that challenge it.
Anchoring: Initial information disproportionately influences subsequent judgment. A headline, even if corrected later, anchors an impression. Corrections rarely travel as far as the original false claim.
Key Thinkers & Theoretical Frameworks
The study of misinformation draws on an unusually rich interdisciplinary tradition. The following thinkers represent the core intellectual lineage every student in this field should engage with.
In Public Opinion (1922), Lippmann argued that people do not respond to reality, but to the “pictures in their heads” — mental models shaped by media and experience. His concept of pseudo-environment prefigures modern filter bubble theory.
Daniel Kahneman's framework distinguishes fast, intuitive System 1 from slow, analytical System 2 thinking. Misinformation exploits System 1 defaults. His 2011 book is foundational reading for understanding why humans are susceptible to false beliefs.
Gordon Pennycook and David Rand's research demonstrates that susceptibility to fake news correlates with lower cognitive reflection, not lower intelligence. Their “nudge” interventions show that prompting analytic thinking before sharing dramatically reduces misinformation spread.
Sander van der Linden extended McGuire's inoculation theory to misinformation, creating the Bad News game and the SIREN framework. His research demonstrates measurable psychological resistance to manipulation techniques after “prebunking” exposure.
Claire Wardle co-authored (with Hossein Derakhshan, 2017) the most influential framework for categorising information disorder, distinguishing misinformation, disinformation, and malinformation. She founded First Draft, a pioneer in verification methods.
Eli Pariser coined the term filter bubble (2011) to describe how personalisation algorithms create information cocoons, exposing users only to content that reinforces existing views and shielding them from challenging perspectives: the algorithmic architecture of epistemic closure.
Shoshana Zuboff's 2019 magnum opus argues that digital platforms extract human experience as raw material for behavioural prediction products, an economic logic that structurally incentivises engagement over accuracy and creates the material conditions for misinformation proliferation.
danah boyd's work on networked publics and context collapse explains how social media flattens contextual norms, allowing information (and misinformation) to travel across audiences never intended by the original author. She advocates critical media-manipulation literacy over conventional media literacy.
Intellectual Lineage: A Brief Timeline
Lippmann — Public Opinion
Establishes that mass publics operate on mediated mental models. Foundation of political communication theory.
McGuire — Inoculation Theory
Proposes that exposing people to weakened counterarguments builds resistance to persuasion — the original “prebunking.”
Tajfel & Turner — Social Identity Theory
Explains in-group loyalty as a core feature of group psychology, directly relevant to partisan belief in misinformation.
Kahneman — Thinking, Fast and Slow · Pariser — The Filter Bubble
Dual-process theory goes mainstream; filter bubble concept frames algorithmic personalisation as a civic danger.
Wardle & Derakhshan — Information Disorder Framework
The definitive taxonomic framework adopted by policy-makers, researchers, and fact-checkers globally.
Vosoughi, Roy & Aral — “The Spread of True and False News” (Science)
Empirically demonstrates false news spreads faster, farther, and more deeply than true news — driven by humans, not bots.
Van der Linden — Prebunking & the SIREN Model
Translates inoculation theory into scalable digital interventions (games, YouTube pre-rolls, platform prompts).
How Misinformation Spreads: Mechanisms & Vectors
Understanding how false information travels is as important as understanding why people believe it. The spread of misinformation is shaped by platform architecture, social network topology, emotional contagion, and identity dynamics.
The Role of Emotional Arousal
Brady et al. (2017) found that moral-emotional language in tweets increases retweet rates by 20% per moral-emotional word used. Misinformation is often deliberately crafted to trigger high-arousal emotions — fear, outrage, disgust — which impair deliberate analytical processing and accelerate sharing before verification.
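Taken at face value, the effect compounds per word. A one-line sketch, assuming (as an illustrative simplification, not the paper's exact model) that the ~20% boost applies multiplicatively per moral-emotional word:

```python
def relative_retweet_rate(moral_emotional_words, boost=0.20):
    """Retweet rate relative to a neutral baseline, applying Brady et al.'s
    (2017) reported ~20% per-word increase multiplicatively (an illustrative
    modelling assumption)."""
    return (1 + boost) ** moral_emotional_words

rate = relative_retweet_rate(3)  # 1.2**3 = 1.728: ~73% above baseline
```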
Echo Chambers vs. Filter Bubbles
Scholars draw an important distinction between filter bubbles (algorithmically curated isolation) and echo chambers (socially reinforced homophily). Bruns (2019) and Guess et al. (2018) find mixed evidence for filter bubbles as structural phenomena, but echo chambers — social groups where views are self-reinforced through selective association — are robustly documented.
While echo chambers are real, exposure to misinformation is often not confined to ideological bubbles. Research by Guess, Nagler & Tucker (2019) found that sharing of fake news on Facebook was concentrated among a small minority of older, conservative users — not uniformly distributed across the population.
Inoculation Theory & Prebunking Approaches
One of the most promising developments in misinformation research is the application of inoculation theory to building psychological resilience against false information. Originally proposed by William McGuire (1964) in the context of attitude change, the theory has been powerfully extended by Sander van der Linden and colleagues at Cambridge University.
The SIREN Framework (Van der Linden)
Van der Linden’s SIREN model identifies five manipulation techniques that cut across specific topics — meaning prebunking these techniques (rather than individual claims) provides broader protection:
| Letter | Technique | Example |
|---|---|---|
| S | Strawman arguments | Misrepresenting an opponent’s position to make it easier to attack |
| I | Impersonating experts | Fake credentials, fabricated endorsements |
| R | Rank-pulling | Ad hominem attacks on critics’ credibility |
| E | Emotional language | Triggering fear/anger to bypass analytical reasoning |
| N | Non-sequitur reasoning | Claims that don’t logically follow from evidence |
Van der Linden’s team created the Bad News game (getbadnews.com), where players roleplay as fake news creators, learning manipulation techniques firsthand. Studies show it significantly increases users’ ability to spot misinformation. This approach has been scaled to YouTube pre-roll advertisements reaching millions of users.
A Critical Thinking Framework for Students
Theoretical knowledge must translate into practical skills. The following framework synthesises evidence-based approaches from fact-checking organisations, cognitive scientists, and media literacy educators into actionable habits for evaluating information.
The SIFT Method (Mike Caulfield)
STOP — Pause before engaging
Before reading, liking, or sharing, pause. Ask: Am I being manipulated emotionally? Am I in an automatic System 1 state? The physical act of stopping breaks the sharing reflex and activates analytical processing.
INVESTIGATE the source
Don’t start reading the article — first look up the source. Use Wikipedia, fact-checking sites, or a quick web search to understand who is behind the content and what their track record and motivations are.
FIND better coverage
For important claims, find multiple credible sources reporting the same information. If only one outlet is running the story — especially a partisan or fringe outlet — treat this as a red flag.
TRACE claims to their origin
Many misleading stories use real quotes, studies, or images — but out of context. Trace any cited study, quote, or image back to its primary source to verify whether the original context supports the claim being made.
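For study or teaching purposes, the four habits can be encoded as a simple checklist. This is a hypothetical encoding: the step names follow Caulfield's acronym, but the questions and the helper function are illustrative, not part of the method itself.

```python
# Hypothetical checklist encoding of Caulfield's SIFT habits.
SIFT_CHECKLIST = {
    "stop": "Have I paused? Is this content pushing a strong emotional button?",
    "investigate": "Who is behind this source, and what is their track record?",
    "find": "Do multiple independent, credible outlets report the same claim?",
    "trace": "Does the primary source (study, quote, image) support this framing?",
}

def sift_remaining(answers):
    """Return the SIFT steps not yet satisfied; `answers` maps step -> bool."""
    return [step for step in SIFT_CHECKLIST if not answers.get(step, False)]

todo = sift_remaining({"stop": True, "investigate": True})
# todo == ["find", "trace"]: the claim is not yet ready to share
```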
The Role of Media Literacy Education
Finland’s national media literacy curriculum, introduced in 2014 and consistently ranked as the world’s most effective, offers a systemic model: integrating critical information skills across all subjects from primary school onwards, rather than treating them as a standalone add-on. Research by the Reuters Institute (2020) confirms that media literacy significantly reduces susceptibility to misinformation, with the most effective programmes combining inoculation approaches with practical verification skills.
The Debunking Handbook (Cook & Lewandowsky, 2020) synthesises evidence-based best practices: (1) Lead with the truth, not the myth. (2) Flag the myth briefly before refuting it. (3) Explain the manipulative technique used. (4) Provide an alternative, coherent narrative to fill the explanatory gap. (5) Repeat the corrected fact, not the myth.
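The ordering logic above can be turned into a correction template. A minimal sketch: the wording and structure are illustrative, not the handbook's own text.

```python
def truth_sandwich(fact, myth, technique):
    """Assemble a correction in the order recommended by Cook & Lewandowsky
    (2020): lead with the fact, flag the myth briefly, explain the
    manipulative technique, then repeat the fact (illustrative template)."""
    return "\n".join([
        f"FACT: {fact}",
        f"MYTH (flagged briefly): {myth}",
        f"TECHNIQUE USED: {technique}",
        f"FACT, again: {fact}",
    ])
```

Note that the fact appears twice and the myth only once, so repetition works for the correction rather than against it, consistent with the illusory truth effect.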
Frequently Asked Questions
What is the difference between misinformation and disinformation?
Misinformation refers to false or inaccurate information spread without deliberate intent to deceive — for example, an individual sharing a false story because they genuinely believe it. Disinformation, by contrast, is false information deliberately created and spread to deceive, manipulate, or harm. The key distinction is intent. A third category — malinformation — consists of true information shared with harmful intent. This three-part taxonomy was formalised by Wardle and Derakhshan (2017).
Why do intelligent people believe fake news?
Intelligence does not immunise against misinformation — and may sometimes intensify susceptibility. High-intelligence individuals may be better at constructing sophisticated rationalisations for beliefs that are identity-driven (a phenomenon called “smart motivated reasoning” or “identity-protective cognition,” studied by Dan Kahan). The key vulnerability is not cognitive capacity but cognitive style: whether people are disposed to engage deliberate, analytical thinking before accepting information. Emotional arousal (triggered by sensational content), social identity pressures, and algorithmic echo chambers all reduce the likelihood of engaging analytical processing, regardless of intelligence.
What is the illusory truth effect and why does it matter?
The illusory truth effect is the robust finding that repeated exposure to a statement increases its perceived truth, regardless of actual accuracy. First documented by Hasher, Goldstein and Toppino (1977), it has been extensively replicated, including showing effects even when participants know the statement was previously labelled false (Fazio et al., 2015). In social media environments where false claims are shared and reshared millions of times, repetition creates an insidious credibility effect. This is why corrections must be carefully designed not to amplify the false claim through repetition — fact-checkers should repeat the truth, not the myth.
What is inoculation theory and does it work?
Inoculation theory, originally proposed by William McGuire (1964), holds that exposing people to a weakened, refuted form of a persuasive argument builds psychological resistance when that argument is later encountered at full strength, analogous to a vaccine. Extended to misinformation by Sander van der Linden and colleagues, inoculation (or “prebunking”) involves (a) warning about manipulation, (b) exposing a weakened version of the misinformation technique, and (c) providing refutation. Multiple randomised controlled trials demonstrate significant and durable effects. The approach has been scaled via browser games, YouTube pre-roll ads, and in-platform prompts.
How does social media amplify misinformation?
Social media amplifies misinformation through several mechanisms: (1) Algorithmic amplification — platforms prioritise content that maximises engagement; emotionally charged misinformation often outperforms factual content. (2) Zero-friction sharing — the ease of retweeting or forwarding means content can spread before verification. (3) Social proof — seeing that something has been widely shared creates a false impression of credibility. (4) Echo chambers — homophilous social networks ensure misinformation circulates among those already predisposed to believe it. (5) Anonymity and accountability gaps — reduced social accountability lowers the threshold for spreading unverified claims.
Is fact-checking effective at reducing misinformation?
The evidence is mixed but cautiously optimistic. Studies show that fact-checks can reduce belief in specific false claims among those who encounter them (Nyhan et al., 2020). However, fact-checks face several structural challenges: (a) they rarely reach the same audience as the original claim; (b) corrections travel slower than falsehoods (Vosoughi et al., 2018); (c) the backfire effect can reinforce beliefs in identity-relevant cases. Emerging evidence suggests that prebunking (building resistance before exposure) is more effective than debunking (correcting after the fact). The most effective fact-checking combines correction with attention to the source's manipulation techniques.
What is a filter bubble and does it really exist?
A filter bubble (Pariser, 2011) is the personalised informational environment created by algorithmic curation, in which users are shown content reflecting their existing interests and views. The empirical evidence for filter bubbles is more nuanced than the concept suggests: Guess, Barberá and colleagues (2018) found that while personalisation does reduce cross-cutting exposure, most users are exposed to a diverse range of sources. The concern may be less about algorithmic filter bubbles and more about socially self-selected echo chambers — people actively choosing to follow ideologically homogeneous networks. Both are real but distinct phenomena with different intervention implications.
Core Bibliography & Further Reading
Essential Texts
| Author(s) | Work | Year | Significance |
|---|---|---|---|
| Kahneman, D. | Thinking, Fast and Slow | 2011 | Foundational dual-process theory |
| Wardle & Derakhshan | Information Disorder (CoE Report) | 2017 | Definitive taxonomy of information disorder |
| Vosoughi, Roy & Aral | “The Spread of True and False News Online” (Science) | 2018 | Landmark empirical study of news spread |
| Pennycook & Rand | “Lazy, Not Biased” (Cognition) | 2019 | Analytical thinking vs. partisan bias in fake news |
| Zuboff, S. | The Age of Surveillance Capitalism | 2019 | Structural economic basis of misinformation ecosystem |
| Cook & Lewandowsky | The Debunking Handbook (2nd ed.) | 2020 | Evidence-based guide to effective corrections |
| Van der Linden, S. | Foolproof | 2023 | Accessible account of inoculation theory & practice |
Key Journals
Students should monitor: Misinformation Review (Harvard Kennedy School); Journal of Communication; Political Communication; Cognition; Nature Human Behaviour; PNAS; and the Reuters Institute Digital News Report (annual).
First Draft (firstdraftnews.org) · Full Fact (fullfact.org) · Snopes · PolitiFact · Media Bias / Fact Check (mediabiasfactcheck.com) · Cambridge Social Decision-Making Lab (psychologyofmisinformation.com) · MediaWise (Poynter) · IFCN Code of Principles
