
Europe faces a decisive phase in its fight against disinformation, and 2025 has made one thing clear: the challenge is no longer defined only by falsehoods circulating online, but by a deeper transformation in how journalism, technology and public trust interact.
Across newsroom analysis and media commentary, a common thread has emerged over the past year: AI is reshaping journalism faster than institutions, audiences and policy frameworks are adapting, with profound implications for trust.
Accelerating technological change
The Reuters Institute’s reflection on how 2025 shaped journalism points to a year marked by overlapping pressures: accelerating technological change, economic strain in newsrooms, and rising expectations for journalism to act as a stabilising force in fragmented information environments.
Rather than a single disruption, the Institute describes a cumulative shift in how news is produced, distributed and consumed, with trust increasingly becoming the defining currency of the profession.
Yet resistance to AI within journalism is far from uniform. While journalists often express concern about AI threatening jobs, ethics or quality, many are already using AI tools in practice – for transcription, research support or content optimisation.
This gap between public scepticism and private adoption illustrates a broader trust dilemma: AI is becoming embedded before shared norms around its use are fully established.
(con)text
Concerns around trust are particularly acute when AI use affects how events are represented. “When the use of AI leads to misrepresentation in text or visuals, audience trust can be lost very quickly – whether this stems from technological failures or irresponsible use by journalists,” CEPS associate researcher Paula Gürtler told Euractiv.
From an ethics perspective, the way AI is adopted in journalism can also send powerful signals to audiences. “If journalists use generative AI systems that were trained illegally on journalistic content, this can send a signal of legitimacy to the public: if even journalists do not care that their data is used without consent, why should anyone else?”, Gürtler added.
She warns that similar tensions arise when AI-generated visuals enter news production, potentially undermining the struggle for fair pay and decent working conditions for photojournalists.
While many media organisations have introduced internal policies to increase transparency around AI use, Gürtler cautions that rules alone cannot cover every scenario.
“There are many grey areas, and each case comes with particular considerations that cannot be fully addressed through organisational policies alone,” she said, arguing for stronger critical AI media literacy among journalists, including an understanding of the politics embedded in AI systems.
A question of trust
The trust question sits at the heart of the Reuters Institute's 'Generative AI and News Report 2025', which examines how audiences and journalists perceive AI's role in journalism.
The report shows widespread public ambivalence: people recognise potential efficiency gains, but remain uneasy about transparency, accuracy and accountability when AI systems are involved in news production.
Crucially, trust is conditional – audiences are more accepting of AI when its use is clearly disclosed and remains under human editorial control.
This tension is not new, but digital transformation has amplified it. The Council of Europe has warned that Europe’s news media sector has been “profoundly transformed by digital technology”, reshaping both business models and the relationship between journalists and the public.
The Council also notes that digitalisation has weakened traditional gatekeeping functions while increasing exposure to manipulated, low-quality or misleading content, placing new responsibility on professional journalism as a democratic safeguard.
Dismantling the “misinfo bomb”
Commentary from within the media ecosystem suggests a growing realism about the scale of the challenge. Charlie Beckett, director of the JournalismAI project at the London School of Economics, argues that the “misinformation bomb” can no longer be defused by fact-checking alone.
Instead, journalism must adapt to an environment where falsehoods, half-truths and context manipulation coexist with accurate reporting, often competing for attention on the same platforms. The argument is not to normalise misinformation, but to recognise that resilience – rather than eradication – may be the more realistic objective.
This shift is closely linked to how AI is changing the journalist-audience relationship. Journalist and academic Margaret Simons highlights that AI tools are altering not just newsroom workflows, but audience expectations about speed, personalisation and authority.
As automation becomes more visible, she argues, journalism risks losing legitimacy if audiences are unclear about who – or what – is responsible for the information they consume.
Keeping up with change
Taken together, these sources suggest a clear outlook for 2026. Disinformation will not disappear, nor will AI use in journalism slow down.
The central question is whether Europe can strengthen trust infrastructure – combining transparent AI governance, resilient journalism and informed audiences – quickly enough to keep pace with technological change.
From an ethical standpoint, the stakes are democratic. “Trust in media is ethically crucial because it underpins the functioning of democracy,” Gürtler said. “When AI use in journalism undermines that trust, we need to find ways to rebuild it quickly.”
Trust is no longer a downstream effect of accurate information alone. It is shaped upstream, by how technologies are deployed, explained and governed within journalism itself. In that sense, the battle against disinformation in 2026 may be decided less by detecting falsehoods than by rebuilding confidence in the systems designed to inform the public.
(BM)


