Can LLMs Assist Annotators in Identifying Morality Frames? - Case Study on Vaccination Debate on Social Media

Tunazzina Islam, Dan Goldwasser. Preprint 2024.

Abstract

In the digital age, social media platforms have become central to public discourse, particularly on polarizing issues like vaccination. These discussions are often steeped in diverse moral perspectives, influencing both individual opinions and public policy. In natural language processing (NLP), annotated data is often scarce, particularly for complex psycho-linguistic concepts (e.g., morality frames) that require specialized knowledge, and the challenge intensifies when such nuanced concepts must be identified from limited data. Furthermore, relying solely on human annotators for this complex task is costly, time-consuming, and often leads to inconsistent annotation quality due to cognitive load. To address these issues, we leverage Large Language Models (LLMs), which are adept at adapting to new tasks through few-shot learning, using only a handful of in-context examples coupled with explanations that connect the examples to task principles. Our research investigates the potential of LLMs to assist annotators in intricate psycho-linguistic tasks. We approach this inquiry through a two-step process: (1) generating the necessary concepts and explanations with LLMs, and (2) assessing those concepts using the explanations provided by the LLMs. In our work, we focus on identifying morality frames within the context of vaccination debates on social media. In step one, we apply few-shot prompting of LLMs, enhanced with explanations. In step two, we conduct a human evaluation that incorporates a “think-aloud” tool, encourages active task participation, and gathers feedback. In a comparative study with participants, we demonstrate that few-shot prompting with explanations enables LLMs to identify morality frames related to vaccination debates on social media with higher accuracy. Incorporating these LLM-generated answers and explanations into the annotation process significantly streamlines morality frame identification, reduces task difficulty, and lowers annotators’ cognitive load. Our findings suggest that LLMs can serve as effective collaborators in complex psycho-linguistic tasks, offering a promising direction for future research in human-AI collaboration.
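
To make step one more concrete, the sketch below illustrates how few-shot prompting with explanations could be set up for morality frame identification. It is a minimal, hypothetical example: the model name, frame labels, example tweets, and prompt wording are assumptions for illustration, not the authors’ actual prompts, examples, or pipeline.

```python
# Illustrative sketch of few-shot prompting with explanations (step one).
# Assumes the OpenAI Python SDK (v1.x); the example tweets, frame labels,
# and prompt text are placeholders, not the paper's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot examples: each pairs a tweet with a morality frame label and an
# explanation connecting the example back to the task principles.
FEW_SHOT_EXAMPLES = """\
Tweet: "Getting vaccinated protects the most vulnerable among us."
Morality frame: Care (target: vulnerable people)
Explanation: The tweet emphasizes preventing harm to others, mapping to the
Care/Harm foundation with vulnerable people as the entity being cared for.

Tweet: "Mandates strip away our right to decide what goes into our bodies."
Morality frame: Oppression (target: individuals subject to mandates)
Explanation: The tweet frames mandates as a restriction of personal liberty,
mapping to the Liberty/Oppression foundation with mandated individuals as
the oppressed entity.
"""

def identify_morality_frame(tweet: str, model: str = "gpt-4") -> str:
    """Ask the LLM for a morality frame plus an explanation for one tweet."""
    prompt = (
        "You identify morality frames (e.g., Care/Harm, Fairness/Cheating, "
        "Loyalty/Betrayal, Authority/Subversion, Purity/Degradation, "
        "Liberty/Oppression) in vaccination-related tweets.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f'Tweet: "{tweet}"\n'
        "Morality frame:"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(identify_morality_frame(
        "Doctors keep telling us the shots are safe, so trust the experts."
    ))
```

The returned frame label and explanation could then be shown to annotators in step two, where they assess the LLM’s answer (e.g., via a think-aloud protocol) rather than labeling each post from scratch.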