Machine Versus Human: A Comparative Study on Satire Annotation

Tailor Ray Corley and Avashna Govender
24/02/2026

Satire is a notorious blind spot for Natural Language Processing (NLP) systems: its reliance on sarcasm, parody, and contextual cues often leaves the intended meaning at odds with the literal one. In this study, we ask whether the newest frontier of NLP, Large Language Models (LLMs), can match or surpass human annotators in identifying satirical language. We source our assessed excerpts from theatrical texts, since they are considered the original form of satire and their longevity allows us to draw on texts from three historical eras. We find that the LLMs significantly outperform the human annotators, indicating a marked improvement in the ability of NLP systems to identify satirical language. Our findings raise questions about how these models acquired this ability and what the implications of their use may be.

Wilmington, Delaware, 19801

ISSN: 3070-3875

DOI: 10.65161

The Oxford Journal of Student Scholarship (ISSN: 3070-3875) is an independent publication and is not affiliated with, endorsed by, or connected to the University of Oxford or any of its colleges, departments, or programs.

© 2025 by the Oxford Journal of Student Scholarship 
