
Deceiving Plausibility: A Potential Policy Solution to the AI Hallucination Crisis

Raphael Ibrahim
15/12/2025

This paper examines the effects of using a watermark, or label, to elicit caution toward artificial intelligence (AI) models’ potential for disinformation. It first reviews fields of psychological research and modeling that offer insight into the watermark’s effectiveness at influencing public behavior, then reports an experiment using a multi-part survey to assess the significance this potential solution holds for the disinformation crisis. The survey was administered to California residents and presented various statements, ranging from claims about art and artists to claims about sports. These claims were shown both on their own and with the watermark in place, to test whether the watermark elicited increased caution. After the survey closed, the results were compiled into a regression table to analyze the statistical significance of the difference between the two groups. From this analysis, the paper concludes that a watermark such as the one used in the survey significantly increases respondents’ cautionary behavior toward information that mimics the plausibility of AI hallucinations.

 

Wilmington, Delaware, 19801

ISSN: 3070-3875

DOI: 10.65161

 

The Oxford Journal of Student Scholarship (ISSN: 3070-3875) is an independent publication and is not affiliated with, endorsed by, or connected to the University of Oxford or any of its colleges, departments, or programs.

 

© 2025 by the Oxford Journal of Student Scholarship 
