Promises and pitfalls of using LLMs to identify actor stances in political discourse
Authors
Viviane Walker, Mario Angst
Keywords:
stance detection, machine learning, large language models
Abstract
Empirical research in the social sciences is often interested in understanding actor stances: the positions that social actors take toward normative statements in societal discourse. In automated text analysis applications, the classification task of stance detection remains challenging. Stance detection is especially difficult due to semantic challenges such as implicitness or missing context, but also due to the general nature of the task. In
this paper, we explore the potential of Large Language Models (LLMs) to enable stance
detection in a generalized (non-domain-specific, non-statement-specific) form. Specifically, we test a variety of general prompt chains for zero-shot stance classification. Our evaluation data consist of text from a real-world empirical research project in the domain of sustainable urban transport. For 1710 German newspaper paragraphs, each
containing an organizational entity, we annotated the stance of the entity toward one
of five normative statements. A comparison of four publicly available LLMs shows that they can improve upon existing approaches and achieve adequate performance. However, results depend heavily on the prompt chain method and the LLM, and vary by statement. Our
findings have implications for computational linguistics methodology and political discourse analysis, as they offer a deeper understanding of the strengths and weaknesses of LLMs in performing the complex semantic task of stance detection. We strongly emphasise the need for domain-specific evaluation data when evaluating LLMs and for considering trade-offs between model complexity and performance.
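The abstract refers to general prompt chains for zero-shot stance classification of an entity toward a normative statement. As a purely illustrative sketch, and not the authors' actual prompts, label set, or models, the following Python snippet shows what a single zero-shot stance query of this kind could look like; the prompt wording, the three-way label scheme, and the query_llm helper are hypothetical placeholders.

```python
# Illustrative sketch of a zero-shot stance query. The prompt text, labels, and
# query_llm() are hypothetical and stand in for any text-in/text-out LLM call.

STANCE_PROMPT = """You are analysing a German newspaper paragraph.
Statement: "{statement}"
Paragraph: "{paragraph}"
Question: What stance does the organisation "{entity}" take toward the statement?
Answer with exactly one label: support, oppose, or neutral."""


def classify_stance(paragraph: str, entity: str, statement: str, query_llm) -> str:
    """Build the zero-shot prompt and map the model's reply to a stance label."""
    prompt = STANCE_PROMPT.format(
        statement=statement, paragraph=paragraph, entity=entity
    )
    reply = query_llm(prompt).strip().lower()  # query_llm: any callable prompt -> text
    for label in ("support", "oppose", "neutral"):
        if label in reply:
            return label
    return "neutral"  # fall back if the model answers off-format
```

In a prompt chain as described in the abstract, several such queries (for example, first checking whether the paragraph is relevant to the statement, then asking for the stance) would be composed, with each step's output feeding the next prompt.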