
Machine Versus Human: A Comparative Study on Satire Annotation
Tailor Ray Corley and Avashna Govender
24/02/2026
Satire is a notorious blind spot for Natural Language Processing (NLP) systems because its embedded sarcasm, parody, and contextual cues separate the intended meaning from the literal one. In this study, we ask whether the newest frontier of NLP, Large Language Models (LLMs), can match or surpass human annotators in identifying satirical language. We source our assessed excerpts from theatrical texts, which are often considered the original form of satire and whose longevity allows us to draw on material from three historical eras. We find that the LLMs significantly outperform the human annotators, indicating a marked improvement in the ability of NLP systems to identify satirical language. Our findings raise questions about how these models gained this ability and about the possible implications of their use.