Misinformation, Fake News & the Psychology of Belief: A Complete Academic Guide (2026)


Academic Module · Media Studies & Cognitive Psychology


A rigorous academic guide for students and researchers exploring how false information is created, spread, and believed — and what cognitive science, communication theory, and critical media studies reveal about the modern epistemic crisis.

Module 01

Defining the Landscape: Mis-, Dis- & Mal-information


The contemporary information environment demands precise terminology. Scholars of communication, political science, and cognitive psychology draw sharp distinctions between closely related concepts. Claire Wardle and Hossein Derakhshan (2017), in their landmark report for the Council of Europe, proposed a taxonomy that remains the most widely cited in academic literature.

⚠ Core Distinction

The critical variable in any information disorder is intent. The same false claim can constitute misinformation (accidental) or disinformation (deliberate). This distinction has profound legal, ethical, and policy implications.

| Category | Truth value · Intent | Typical forms | Sources |
|---|---|---|---|
| Misinformation | False · no intent to harm | Satire mistaken as fact; false context / old images; inaccurate reporting; misleading framing | Wardle & Derakhshan, 2017 |
| Disinformation | False · deliberate intent to harm | State-sponsored propaganda; fabricated quotes / images; coordinated inauthentic behaviour; astroturfing campaigns | EUvsDisinfo; Pennycook, 2021 |
| Malinformation | True · harmful intent | Privacy violations; out-of-context truth; doxxing / harassment; weaponised leaks | Wardle, 2019 |
Fig 1.1 — The Three Categories of Information Disorder

Why “Fake News” is a Contested Term

The phrase fake news entered mainstream discourse around the 2016 US presidential election, yet scholars are wary of using it unreflectively. The term has been politically weaponised — used by politicians to delegitimise accurate journalism they find unflattering. For this reason, many academics prefer the more precise vocabulary of “information disorder” (Wardle) or “epistemic pollution” (Floridi, 2016).

📚 Key Reading

Wardle, C. & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe Report DGI(2017)09. Essential foundation for any study of misinformation.

A Seven-Type Typology

| Type | Example | False? | Harmful? | Intentional? |
|---|---|---|---|---|
| Satire / Parody | The Onion mistaken as real | Yes | Sometimes | No |
| Misleading Content | Deceptive framing of facts | Partial | Sometimes | Yes |
| Imposter Content | Fake BBC homepage | Yes | Yes | Yes |
| Fabricated Content | Invented quotes from leaders | Yes | Yes | Yes |
| False Connection | Headline doesn’t match story | Partial | Sometimes | Yes |
| False Context | Real image, wrong caption | Partial | Sometimes | Yes |
| Manipulated Content | Deepfakes, doctored images | Yes | Yes | Yes |
Module 02

The Misinformation Ecosystem


Misinformation does not exist in a vacuum. It flows through a complex ecosystem involving producers, amplifiers, platforms, and audiences. Understanding the systemic architecture of this ecosystem is essential to designing effective interventions.

The misinformation ecosystem: creators (state actors, bots, trolls, partisans) feed social media (algorithms, virality, echo chambers) and fringe media (conspiracy sites, partisan outlets); platforms (recommendation engines, monetisation models) deliver content to audiences (cognitive biases, social identity), whose engagement feeds back to shape algorithmic amplification. Adapted from Wardle (2019) and Benkler et al. (2018).
Fig 2.1 — Information Flow & Feedback in the Misinformation Ecosystem

The Attention Economy Problem

Shoshana Zuboff’s concept of surveillance capitalism (2019) provides crucial context: digital platforms profit from maximising engagement, not accuracy. Emotionally arousing content — including false and outrage-inducing information — generates more clicks, shares, and time-on-platform than dry, factual reporting. This creates a structural incentive misalignment at the heart of the information ecosystem.
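To make the incentive misalignment concrete, here is a deliberately oversimplified Python sketch of an engagement-ranked feed. The posts, scores, and scoring rule are all invented for illustration and imply nothing about any real platform's ranking model.

```python
# Toy illustration of the engagement/accuracy misalignment described above.
# All posts, weights, and scores are invented; no real platform is implied.
posts = [
    {"title": "Dry but accurate budget report",      "accuracy": 0.95, "arousal": 0.10},
    {"title": "OUTRAGEOUS fabricated scandal claim", "accuracy": 0.15, "arousal": 0.95},
    {"title": "Careful fact-check of the scandal",   "accuracy": 0.98, "arousal": 0.30},
]

def predicted_engagement(post: dict) -> float:
    # The score tracks emotional arousal; accuracy never enters it.
    return post["arousal"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
for rank, post in enumerate(feed, start=1):
    print(f"{rank}. ({predicted_engagement(post):.2f}) {post['title']}")
```

In this caricature the fabricated, high-arousal item tops the feed and the fact-check trails it, despite being the least accurate item in the pool: accuracy simply is not an input to the objective.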

Vosoughi, Roy, and Aral’s landmark 2018 study in Science found that false news spreads significantly faster, farther, and more broadly than true news on Twitter, and that human behaviour — not bots — was the primary driver of this asymmetry.

📊 Key Finding

False stories are 70% more likely to be retweeted than true stories. Novelty — the fact that false information tends to be more surprising — drives this pattern, exploiting human curiosity and emotional response systems. (Vosoughi, Roy & Aral, 2018, Science)
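The compounding consequence of a per-exposure advantage can be illustrated with a toy branching process. Note that translating the 70% figure into per-exposure share probabilities (p_true = 0.010 vs p_false = 0.017), along with the follower and generation counts, is our own simplifying assumption; Vosoughi et al. measured real Twitter cascades, not this model.

```python
# A branching-process sketch of differential spread (illustrative only).
# Assumption: "70% more likely to be retweeted" is read as a 1.7x
# per-exposure share probability; parameters are invented, not from the study.
import random

random.seed(42)

def avg_cascade_size(p_share: float, followers: int = 100,
                     generations: int = 4, trials: int = 500) -> float:
    """Average number of sharers when each exposed user shares with p_share."""
    total = 0
    for _ in range(trials):
        active, size = 1, 1                      # the original poster
        for _ in range(generations):
            exposures = active * followers
            # count how many newly exposed users become sharers
            new = sum(random.random() < p_share for _ in range(exposures))
            size += new
            active = new
            if active == 0:
                break
        total += size
    return total / trials

print(f"true story  (p=0.010): {avg_cascade_size(0.010):.1f} sharers")
print(f"false story (p=0.017): {avg_cascade_size(0.017):.1f} sharers")
```

Even over four sharing generations, the modestly higher share probability pushes the false story's branching factor above the viral threshold (about 1.7 new sharers per sharer versus 1.0), which is why small per-exposure differences produce large cascade asymmetries.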

Module 03

The Psychology of Belief Formation


Why do intelligent, educated people believe false information? The answer lies not in stupidity, but in the deep architecture of human cognition. Belief is not a passive reception of evidence; it is an active, social, emotional, and identity-laden process.

Dual-Process Theory (System 1 & System 2)

Daniel Kahneman, awarded the Nobel Memorial Prize in Economics for his work on judgment and decision-making, popularised the dual-process framework in Thinking, Fast and Slow (2011). It describes two modes of cognition:

System 1 — Fast Thinking: automatic, effortless, and unconscious; emotionally driven and heuristic-based; relies on pattern recognition and intuition; susceptible to cognitive biases. Misinformation exploits this system (Kahneman, 2011; Evans, 2008).
System 2 — Slow Thinking: deliberate, effortful, and conscious; performs logical reasoning and analysis; can override automatic responses but requires motivation and capacity; can correct System 1 errors, if engaged (Pennycook & Rand, 2019; Stanovich, 2009).
Fig 3.1 — Kahneman’s Dual-Process Theory Applied to Misinformation

Research by Gordon Pennycook and David Rand (2019) suggests that belief in fake news is primarily a failure of System 2 engagement — not a lack of intelligence, but a failure to pause and apply analytical reasoning. Critically, the solution involves promoting “cognitive reflection”: the disposition to slow down and think deliberately before accepting claims.

Social Identity & Motivated Reasoning

Ziva Kunda’s theory of motivated reasoning (1990) argues that people are “motivated to arrive at desired conclusions” and will unconsciously employ cognitive strategies to reach them. When misinformation aligns with group identity — political, religious, ethnic — the drive to accept it is powerfully amplified by social belonging and in-group loyalty.

“Our tribal psychology makes us enormously resistant to information that threatens our group identity, even when it comes from authoritative sources.”

— Dan Kahan, Cultural Cognition Project, Yale Law School

The Illusory Truth Effect

One of the most alarming findings in misinformation research is the illusory truth effect (Hasher, Goldstein & Toppino, 1977; Pennycook et al., 2018): repeated exposure to a statement increases its perceived truth, regardless of actual accuracy. This is because repetition increases processing fluency — the ease with which the brain processes information — which is misattributed to truth.
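A toy model helps show why repetition is so potent even though each additional exposure adds less. Everything here (the saturating functional form and the baseline, gain, and rate constants) is a hypothetical illustration, not a fit to Hasher et al. (1977) or any later replication.

```python
# A minimal toy model of the illusory truth effect (hypothetical numbers):
# perceived truth rises with processing fluency, which grows with repeated
# exposure and saturates. Constants are illustrative only.
import math

def perceived_truth(exposures: int, baseline: float = 0.40,
                    gain: float = 0.35, k: float = 0.6) -> float:
    """0-1 truth rating: a baseline judgment plus a saturating fluency boost."""
    fluency = 1 - math.exp(-k * exposures)   # diminishing returns per repeat
    return baseline + gain * fluency

for n in (0, 1, 2, 5, 10):
    print(f"{n:2d} exposures -> perceived truth {perceived_truth(n):.2f}")
```

The largest jump comes from the first one or two repetitions, consistent with the finding that even a single re-encounter measurably boosts perceived truth.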

⚠ Implication

Simply repeating a myth to debunk it can paradoxically strengthen it in readers’ minds. Effective fact-checking must therefore use strategic communication — leading with truth, not with the falsehood being corrected. This principle is central to the “truth sandwich” approach (Lakoff, 2017).

Module 04

Cognitive Biases That Drive False Belief


Human cognition is riddled with systematic shortcuts — heuristics — that were adaptive in evolutionary environments but create vulnerabilities in the modern information landscape. Below is a curated map of the biases most directly implicated in belief in misinformation.

Cognitive biases and belief: confirmation bias (seeking confirming evidence) · illusory truth effect (repetition breeds perceived truth) · motivated reasoning (desired conclusions override evidence) · Dunning-Kruger effect (overconfidence in limited knowledge) · backfire effect (corrections entrench false beliefs) · in-group bias (trusting group members over outsiders).
Fig 4.1 — Key Cognitive Biases in the Misinformation Vulnerability System
Confirmation Bias

The tendency to search for and favour information that confirms existing beliefs. Identified by Peter Wason (1960), it is the single most studied cognitive bias in misinformation research.

Illusory Truth Effect

Repeated exposure increases perceived truth regardless of accuracy. Particularly dangerous in social media environments where false claims circulate continuously (Hasher et al., 1977; Pennycook et al., 2018).

Motivated Reasoning

People deploy reasoning capacities not to find truth but to justify desired conclusions (Kunda, 1990). Intelligence can paradoxically intensify this bias — more capable people construct more sophisticated rationalisations.

Availability Heuristic

We judge the likelihood of events by how easily examples come to mind. Vivid, emotional misinformation becomes cognitively “available” and feels more representative than it is (Tversky & Kahneman, 1973).

The Backfire Effect

In some conditions, corrections can cause beliefs to become more extreme. More recent research questions its universality (Wood & Porter, 2019); where it does occur, it appears strongest around identity-relevant beliefs.

Dunning-Kruger Effect

People with limited knowledge in a domain systematically overestimate their competence, making them resistant to expert correction and susceptible to confident-sounding but false information (Kruger & Dunning, 1999).

In-group / Out-group Bias

We apply different standards to claims from our group vs. others. Social identity theory (Tajfel & Turner, 1979) predicts we will uncritically accept claims supporting our group and reject those that challenge it.

Anchoring Bias

Initial information disproportionately influences subsequent judgment. A headline, even if corrected later, anchors an impression. Corrections rarely travel as far as the original false claim.

Module 05

Key Thinkers & Theoretical Frameworks


The study of misinformation draws on an unusually rich interdisciplinary tradition. The following thinkers represent the core intellectual lineage every student in this field should engage with.

Walter Lippmann
1889–1974 · Political Journalist & Theorist
Stereotypes & Public Opinion Theory

In Public Opinion (1922), Lippmann argued that people do not respond to reality, but to the “pictures in their heads” — mental models shaped by media and experience. His concept of pseudo-environment prefigures modern filter bubble theory.

Daniel Kahneman
1934–2024 · Nobel Laureate, Behavioural Economics
Dual-Process Theory (System 1 & 2)

His framework distinguishes fast, intuitive System 1 from slow, analytical System 2 thinking. Misinformation exploits System 1 defaults. His 2011 book is foundational reading for understanding why humans are susceptible to false beliefs.

Gordon Pennycook
1987– · Cognitive Psychologist, U. of Regina
Analytical Thinking & Fake News Susceptibility

Pennycook and David Rand’s research demonstrates that susceptibility to fake news correlates with lower cognitive reflection, not lower intelligence. Their “nudge” interventions show that prompting analytic thinking before sharing dramatically reduces misinformation spread.

Sander van der Linden
1989– · Social Psychologist, Cambridge
Inoculation Theory & Prebunking

Extended McGuire’s inoculation theory to misinformation, creating the Bad News game and the SIREN framework. His research demonstrates measurable psychological resistance to manipulation techniques after “prebunking” exposure.

Claire Wardle
1977– · Communication Researcher, Brown U.
Information Disorder Taxonomy

Co-author of the most influential framework for categorising information disorder (with Derakhshan, 2017), distinguishing misinformation, disinformation, and malinformation. Founder of First Draft, a pioneer in verification methods.

Eli Pariser
1980– · Author & Internet Theorist
Filter Bubble Theory

Coined the term filter bubble (2011) to describe how personalisation algorithms create information cocoons, exposing users only to content reinforcing existing views and shielding them from challenging perspectives — the algorithmic architecture of epistemic closure.

Shoshana Zuboff
1951– · Philosopher, Harvard Business School
Surveillance Capitalism

Her 2019 magnum opus argues that digital platforms extract human experience as raw material for behavioural prediction products — an economic logic that structurally incentivises engagement over accuracy, creating the material conditions for misinformation proliferation.

Danah Boyd
1977– · Principal Researcher, Microsoft Research
Context Collapse & Critical Literacy

Boyd’s work on networked publics and context collapse explains how social media flattens contextual norms, allowing information (and misinformation) to travel across audiences never intended by the original author. She advocates for critical media manipulation literacy over conventional media literacy.


Intellectual Lineage: A Brief Timeline

1922

Lippmann — Public Opinion

Establishes that mass publics operate on mediated mental models. Foundation of political communication theory.

1964

McGuire — Inoculation Theory

Proposes that exposing people to weakened counterarguments builds resistance to persuasion — the original “prebunking.”

1979

Tajfel & Turner — Social Identity Theory

Explains in-group loyalty as a core feature of group psychology, directly relevant to partisan belief in misinformation.

2011

Kahneman — Thinking, Fast and Slow · Pariser — The Filter Bubble

Dual-process theory goes mainstream; filter bubble concept frames algorithmic personalisation as a civic danger.

2017

Wardle & Derakhshan — Information Disorder Framework

The definitive taxonomic framework adopted by policy-makers, researchers, and fact-checkers globally.

2018

Vosoughi, Roy & Aral — “The Spread of True and False News Online” (Science)

Empirically demonstrates false news spreads faster, farther, and more deeply than true news — driven by humans, not bots.

2019–

Van der Linden — Prebunking & the SIREN Model

Translates inoculation theory into scalable digital interventions (games, YouTube pre-rolls, platform prompts).

Module 06

How Misinformation Spreads: Mechanisms & Vectors


Understanding how false information travels is as important as understanding why people believe it. The spread of misinformation is shaped by platform architecture, social network topology, emotional contagion, and identity dynamics.

Misinformation spread, from origin to mass belief: false claim (origin) → emotional contagion (fear, anger, joy) → social norms (sharing signals belonging) → algorithmic amplification (engagement metrics) → network clusters and cross-platform migration → mass belief (millions exposed).
Fig 6.1 — Mechanisms of Misinformation Propagation (Network Model)

The Role of Emotional Arousal

Brady et al. (2017) found that moral-emotional language in tweets increases retweet rates by 20% per moral-emotional word used. Misinformation is often deliberately crafted to trigger high-arousal emotions — fear, outrage, disgust — which impair deliberate analytical processing and accelerate sharing before verification.
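As a back-of-envelope check on what an effect of that size means, the sketch below compounds an assumed 1.2x factor per moral-emotional word; treating the 20% figure as exactly multiplicative is our simplification of Brady et al.'s reported estimate.

```python
# Back-of-envelope reading of Brady et al. (2017): ~20% higher retweet rate
# per moral-emotional word, treated here as a multiplicative 1.2x per word.
# This compounding is an approximation of their reported effect, not a quote.
for n_words in range(5):
    multiplier = 1.20 ** n_words
    print(f"{n_words} moral-emotional words -> ~{multiplier:.2f}x baseline rate")
```

Three such words already imply roughly 1.7 times the baseline retweet rate, which is why outrage-laden phrasing is a reliable feature of viral misinformation.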

Echo Chambers vs. Filter Bubbles

Scholars draw an important distinction between filter bubbles (algorithmically curated isolation) and echo chambers (socially reinforced homophily). Bruns (2019) and Guess et al. (2018) find mixed evidence for filter bubbles as structural phenomena, but echo chambers — social groups where views are self-reinforced through selective association — are robustly documented.

📌 Research Nuance

While echo chambers are real, exposure to misinformation is often not confined to ideological bubbles. Research by Guess, Nagler & Tucker (2019) found that sharing of fake news on Facebook was concentrated among a small minority of older, conservative users — not uniformly distributed across the population.

Module 07

Inoculation Theory & Prebunking Approaches


One of the most promising developments in misinformation research is the application of inoculation theory to building psychological resilience against false information. Originally proposed by William McGuire (1964) in the context of attitude change, the theory has been powerfully extended by Sander van der Linden and colleagues at Cambridge University.

The inoculation process (McGuire, 1964; Van der Linden, 2017): 1. Warn (alert to impending manipulation) → 2. Expose (a weakened form of the false argument) → 3. Refute (provide counter-arguments and facts) → 4. Build (psychological resistance) → resilient, inoculated belief. Analogous to medical vaccination: controlled exposure plus an immune response yields future resistance.
Fig 7.1 — The Inoculation / Prebunking Process
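The logic of prebunking can be caricatured in a few lines of Python: treat inoculation as lowering a reader's susceptibility to a later persuasion attack. The update rule and every number below (prior credence, attack strength, the two-thirds susceptibility reduction) are hypothetical illustrations, not effect sizes from the inoculation literature.

```python
# Toy model of inoculation (all numbers hypothetical): prebunking is treated
# as lowering susceptibility to a later persuasion attack.
def belief_after_attack(belief: float, attack: float,
                        susceptibility: float) -> float:
    """Move credence in the false claim toward 1.0 by attack * susceptibility."""
    return belief + (1.0 - belief) * attack * susceptibility

start = 0.20                                              # prior credence
no_prebunk   = belief_after_attack(start, attack=0.6, susceptibility=1.0)
with_prebunk = belief_after_attack(start, attack=0.6, susceptibility=0.33)
print(f"after attack, no prebunking:   {no_prebunk:.2f}")    # -> 0.68
print(f"after attack, with prebunking: {with_prebunk:.2f}")  # -> 0.36
```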

The SIREN Framework (Van der Linden)

Van der Linden’s SIREN model identifies five manipulation techniques that cut across specific topics — meaning prebunking these techniques (rather than individual claims) provides broader protection:

| Letter | Technique | Example |
|---|---|---|
| S | Strawman arguments | Misrepresenting an opponent’s position to make it easier to attack |
| I | Impersonating experts | Fake credentials, fabricated endorsements |
| R | Rank-pulling | Ad hominem attacks on critics’ credibility |
| E | Emotional language | Triggering fear/anger to bypass analytical reasoning |
| N | Non-sequitur reasoning | Claims that don’t logically follow from evidence |
🎮 Applied Innovation

Van der Linden’s team created the Bad News game (getbadnews.com), where players roleplay as fake news creators, learning manipulation techniques firsthand. Studies show it significantly increases users’ ability to spot misinformation. This approach has been scaled to YouTube pre-roll advertisements reaching millions of users.

Module 08

A Critical Thinking Framework for Students


Theoretical knowledge must translate into practical skills. The following framework synthesises evidence-based approaches from fact-checking organisations, cognitive scientists, and media literacy educators into actionable habits for evaluating information.

The SIFT Method (Mike Caulfield)

S

STOP — Pause before engaging

Before reading, liking, or sharing, pause. Ask: Am I being manipulated emotionally? Am I in an automatic System 1 state? The physical act of stopping breaks the sharing reflex and activates analytical processing.

I

INVESTIGATE the source

Don’t start reading the article — first look up the source. Use Wikipedia, fact-checking sites, or a quick web search to understand who is behind the content and what their track record and motivations are.

F

FIND better coverage

For important claims, find multiple credible sources reporting the same information. If only one outlet is running the story — especially a partisan or fringe outlet — treat this as a red flag.

T

TRACE claims to their origin

Many misleading stories use real quotes, studies, or images — but out of context. Trace any cited study, quote, or image back to its primary source to verify whether the original context supports the claim being made. A minimal checklist sketch of the four moves follows.
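As promised above, here is a minimal sketch of SIFT as a pre-sharing checklist. The four move names are Caulfield's; the prompts are our paraphrases, and the code structure is purely illustrative rather than part of the method itself.

```python
# A minimal sketch of SIFT as a pre-sharing checklist. Move names are
# Caulfield's; prompts are paraphrases; the structure is our illustration.
SIFT_MOVES = {
    "Stop":        "Pause. Am I reacting emotionally rather than analytically?",
    "Investigate": "Who is behind this source, and what is their track record?",
    "Find":        "Are multiple credible outlets reporting the same claim?",
    "Trace":       "Does the primary source (study, quote, image) support it?",
}

def sift_check(answers: dict) -> bool:
    """Return True only if every SIFT move has been satisfied."""
    for move, prompt in SIFT_MOVES.items():
        status = "ok" if answers.get(move, False) else "!!"
        print(f"[{status}] {move}: {prompt}")
    return all(answers.get(move, False) for move in SIFT_MOVES)

print("safe to share:",
      sift_check({"Stop": True, "Investigate": True, "Find": False}))
```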

Vertical reading (the naive approach, how non-experts typically read): read the article in full; evaluate it on internal consistency; assess author credentials from the site itself; be misled by credible-looking design and deceptive framing. This is how misinformation wins.
Lateral reading (the expert approach, how fact-checkers read): immediately leave the site to check the source; search the source name plus “bias” or “credibility”; consult Wikipedia, AllSides, or Media Bias/Fact Check; find what other credible sources are saying; then, and only then, read the article. This is how misinformation is defeated.
Fig 8.1 — Vertical Reading vs. Lateral Reading Strategies (Caulfield, 2017)

The Role of Media Literacy Education

Finland’s national media literacy curriculum, introduced in 2014 and consistently ranked as the world’s most effective, offers a systemic model: integrating critical information skills across all subjects from primary school onwards, rather than treating them as a standalone add-on. Research by the Reuters Institute (2020) confirms that media literacy significantly reduces susceptibility to misinformation, with the most effective programmes combining inoculation approaches with practical verification skills.

📌 Debunking Best Practices

The Debunking Handbook (Cook & Lewandowsky, 2020) synthesises evidence-based best practices: (1) Lead with the truth, not the myth. (2) Flag the myth briefly before refuting it. (3) Explain the manipulative technique used. (4) Provide an alternative, coherent narrative to fill the explanatory gap. (5) Repeat the corrected fact, not the myth.

Module 09

Frequently Asked Questions

What is the difference between misinformation and disinformation?

Misinformation refers to false or inaccurate information spread without deliberate intent to deceive — for example, an individual sharing a false story because they genuinely believe it. Disinformation, by contrast, is false information deliberately created and spread to deceive, manipulate, or harm. The key distinction is intent. A third category — malinformation — consists of true information shared with harmful intent. This three-part taxonomy was formalised by Wardle and Derakhshan (2017).

Why do intelligent people believe fake news?

Intelligence does not immunise against misinformation — and may sometimes intensify susceptibility. High-intelligence individuals may be better at constructing sophisticated rationalisations for beliefs that are identity-driven (a phenomenon called “smart motivated reasoning” or “identity-protective cognition,” studied by Dan Kahan). The key vulnerability is not cognitive capacity but cognitive style: whether people are disposed to engage deliberate, analytical thinking before accepting information. Emotional arousal (triggered by sensational content), social identity pressures, and algorithmic echo chambers all reduce the likelihood of engaging analytical processing, regardless of intelligence.

What is the illusory truth effect and why does it matter?

The illusory truth effect is the robust finding that repeated exposure to a statement increases its perceived truth, regardless of actual accuracy. First documented by Hasher, Goldstein and Toppino (1977), it has been extensively replicated, including showing effects even when participants know the statement was previously labelled false (Fazio et al., 2015). In social media environments where false claims are shared and reshared millions of times, repetition creates an insidious credibility effect. This is why corrections must be carefully designed not to amplify the false claim through repetition — fact-checkers should repeat the truth, not the myth.

What is inoculation theory and does it work?

Inoculation theory, originally proposed by William McGuire (1964), holds that exposing people to a weakened, refuted form of a persuasive argument builds psychological resistance to that argument when it is later encountered at full strength — analogous to a vaccine. Extended to misinformation by Sander van der Linden and colleagues, inoculation (or “prebunking”) involves (a) warning about manipulation, (b) exposing a weakened version of the misinformation technique, and (c) providing refutation. Multiple randomised controlled trials demonstrate significant and durable effects. The approach has been scaled via browser games, YouTube pre-roll ads, and in-platform prompts.

How does social media amplify misinformation?

Social media amplifies misinformation through several mechanisms: (1) Algorithmic amplification — platforms prioritise content that maximises engagement; emotionally charged misinformation often outperforms factual content. (2) Zero-friction sharing — the ease of retweeting or forwarding means content can spread before verification. (3) Social proof — seeing that something has been widely shared creates a false impression of credibility. (4) Echo chambers — homophilous social networks ensure misinformation circulates among those already predisposed to believe it. (5) Anonymity and accountability gaps — reduced social accountability lowers the threshold for spreading unverified claims.

Is fact-checking effective at reducing misinformation?

The evidence is mixed but cautiously optimistic. Studies show that fact-checks can reduce belief in specific false claims among those who encounter them (Nyhan et al., 2020). However, fact-checks face several structural challenges: (a) they rarely reach the same audience as the original claim; (b) corrections travel slower than falsehoods (Vosoughi et al., 2018); (c) the backfire effect can reinforce beliefs in identity-relevant cases. Emerging evidence suggests that prebunking (building resistance before people encounter misinformation) is more effective than debunking (correcting it after the fact). The most effective fact-checking combines correction with attention to the source’s manipulation techniques.

What is a filter bubble and does it really exist?

A filter bubble (Pariser, 2011) is the personalised informational environment created by algorithmic curation, in which users are shown content reflecting their existing interests and views. The empirical evidence for filter bubbles is more nuanced than the concept suggests: Guess, Barberá and colleagues (2018) found that while personalisation does reduce cross-cutting exposure, most users are exposed to a diverse range of sources. The concern may be less about algorithmic filter bubbles and more about socially self-selected echo chambers — people actively choosing to follow ideologically homogeneous networks. Both are real but distinct phenomena with different intervention implications.

Module 10

Core Bibliography & Further Reading


Essential Texts

| Author(s) | Work | Year | Significance |
|---|---|---|---|
| Kahneman, D. | Thinking, Fast and Slow | 2011 | Foundational dual-process theory |
| Wardle, C. & Derakhshan, H. | Information Disorder (Council of Europe report) | 2017 | Definitive taxonomy of information disorder |
| Vosoughi, Roy & Aral | “The Spread of True and False News Online” (Science) | 2018 | Landmark empirical study of news spread |
| Pennycook & Rand | “Lazy, Not Biased” (Cognition) | 2019 | Analytical thinking vs. partisan bias in fake news |
| Zuboff, S. | The Age of Surveillance Capitalism | 2019 | Structural economic basis of the misinformation ecosystem |
| Cook & Lewandowsky | The Debunking Handbook (2nd ed.) | 2020 | Evidence-based guide to effective corrections |
| Van der Linden, S. | Foolproof | 2023 | Accessible account of inoculation theory and practice |

Key Journals

Students should monitor: Misinformation Review (Harvard Kennedy School); Journal of Communication; Political Communication; Cognition; Nature Human Behaviour; PNAS; and the Reuters Institute Digital News Report (annual).

🌐 Online Resources

First Draft (firstdraftnews.org) · Full Fact (fullfact.org) · Snopes · PolitiFact · Media Bias / Fact Check (mediabiasfactcheck.com) · Cambridge Social Decision-Making Lab (psychologyofmisinformation.com) · MediaWise (Poynter) · IFCN Code of Principles

IASNOVA Academic Resource Series · Misinformation, Fake News & the Psychology of Belief
All content is for educational purposes. Cite as: IASNOVA Academic Module, 2025. This guide synthesises peer-reviewed scholarship and does not constitute original research. All referenced works belong to their respective authors and publishers.
