When the source is a bot: How people adapt their evaluation strategies to assess AI-generated content

Written on 30/03/2026
by Shakked Dabran-Zivan

PLoS One. 2026 Mar 30;21(3):e0345300. doi: 10.1371/journal.pone.0345300. eCollection 2026.

ABSTRACT

Generative artificial intelligence (GenAI) blurs the boundaries between expert and non-expert sources as it increasingly creates and distributes scientific content. This study examines how individuals adapt their evaluation strategies, including content evaluation, source evaluation, and corroboration, when using GenAI versus a search engine. Based on performance tasks in which 30 adult participants from diverse educational backgrounds evaluated socio-scientific dilemmas, together with follow-up interviews, findings reveal that users employed these strategies on both platforms but adapted them in distinct ways. Two evaluation strategies emerged as analytical constructs from the qualitative data. First, to corroborate output, participants frequently used a strategy we term 'representation evaluation': assessing whether GenAI accurately summarized its sources rather than independently verifying that the sources agree. Second, participants applied 'meta source evaluation', relying on their familiarity with the sources GenAI provided instead of evaluating those sources directly. Although all participants engaged in dialogue with the chatbot, they did not leverage its dialogue capabilities to assess credibility, and many relied on a 'machine heuristic', assuming GenAI's inherent correctness, which reflects a well-documented over-trust in automated systems. This research underscores the importance of developing and assessing critical evaluation skills for navigating AI-generated scientific information, and it extends existing models of online information evaluation to contexts mediated by artificial intelligence.

PMID:41911240 | DOI:10.1371/journal.pone.0345300