LLMs Outperform Outsourced Human Coders on Complex Textual Analysis
33 Pages | Posted: 23 Dec 2024 | Last revised: 24 Dec 2024
Date Written: November 13, 2024
Abstract
This paper evaluates the effectiveness of large language models (LLMs) in extracting complex information from textual sources. We compare the performance of various LLMs with that of outsourced human coders on five natural language processing tasks, ranging from named entity recognition to identifying nuanced political criticism in news articles. Using a corpus of Spanish news articles, we find that LLMs consistently outperform outsourced human coders, especially in tasks requiring deep contextual understanding. These findings suggest that current LLM technology offers researchers without programming expertise a cost-effective alternative to outsourced human coding for sophisticated text analysis.
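The abstract does not disclose the prompts, models, or scoring procedure used in the study; the sketch below is only a rough illustration of the kind of comparison described, applying an LLM to one of the listed tasks (named entity recognition on a Spanish news snippet) and scoring simple overlap against a hypothetical human annotation. The model name, prompt wording, example article, and agreement metric are all assumptions, not the paper's actual setup.

```python
# Illustrative sketch only: model, prompt, example text, and metric are assumptions,
# not the study's published pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_entities(article_text: str, model: str = "gpt-4o-mini") -> set[str]:
    """Ask an LLM to list named entities (people, organizations, places)
    mentioned in a Spanish news article, one per line."""
    prompt = (
        "Lista las entidades nombradas (personas, organizaciones, lugares) "
        "mencionadas en el siguiente artículo, una por línea, sin explicaciones:\n\n"
        + article_text
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    lines = response.choices[0].message.content.splitlines()
    # Strip bullet characters and whitespace; keep non-empty lines as entity strings.
    return {line.strip("-• ").strip() for line in lines if line.strip()}


def agreement(llm_labels: set[str], human_labels: set[str]) -> float:
    """Simple set-overlap (Jaccard) agreement between LLM and human annotations."""
    if not llm_labels and not human_labels:
        return 1.0
    return len(llm_labels & human_labels) / len(llm_labels | human_labels)


if __name__ == "__main__":
    article = (
        "El presidente de Argentina se reunió ayer con representantes "
        "del FMI en Buenos Aires."
    )
    human = {"Argentina", "FMI", "Buenos Aires"}  # hypothetical human-coder labels
    llm = extract_entities(article)
    print(f"LLM entities: {llm}")
    print(f"Agreement with human coder: {agreement(llm, human):.2f}")
```

Jaccard overlap is used here only as a minimal stand-in; a study of this kind would typically report task-appropriate measures such as precision, recall, or inter-coder reliability against a gold standard.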
Keywords: Large Language Models, Text Analysis, Human Annotation, Natural Language Processing, News, Media