Current NLP systems largely rely on single “gold” labels, overlooking the diversity of human perspectives inherent in language interpretation. This simplification obscures demographic and sociocultural nuances and limits the ability of models to represent minority viewpoints. In this project, we investigate how multiple perspectives can be systematically integrated across the NLP pipeline, from data collection and modeling to evaluation and explanation.
We examine the requirements for perspectivist corpus design, including annotator diversity, disaggregated labeling, and the role of annotation paradigms in shaping perspective representation. Furthermore, we explore modeling approaches that incorporate user and demographic information, and assess how disagreement can be distinguished from meaningful variation in viewpoints. With the rise of Large Language Models, we also study whether persona-based prompting can simulate human-like annotations and to what extent such generated perspectives align with real annotator behavior.
Finally, we address the challenge of evaluating perspectivist systems, proposing alternatives to majority-based metrics that better capture individual viewpoints, and investigating how explanations can complement labels to more fully represent perspectives. Our findings aim to advance the development of NLP systems that more accurately reflect the diversity of human interpretation.
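To make the contrast between single "gold" labels and disaggregated labeling concrete, here is a minimal illustrative sketch (the annotations and labels are hypothetical, not from any of the datasets below): a majority vote collapses per-annotator judgments into one label and erases the minority viewpoint, whereas the disaggregated view keeps it.

```python
# Illustrative sketch: majority-vote aggregation vs. disaggregated labels.
from collections import Counter

# Hypothetical disaggregated annotations for one item: annotator id -> label.
annotations = {"a1": "offensive", "a2": "offensive", "a3": "not_offensive"}

def majority_label(item_annotations):
    """Collapse per-annotator labels into a single 'gold' label."""
    counts = Counter(item_annotations.values())
    return counts.most_common(1)[0][0]

gold = majority_label(annotations)  # the aggregated "gold" label
minority = {a for a, label in annotations.items() if label != gold}
# The disaggregated view preserves that annotator a3 disagreed;
# the single gold label discards that information entirely.
```

A perspectivist corpus keeps the full `annotations` mapping (plus annotator metadata) rather than storing only `gold`.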
@inproceedings{sarumi-etal-2026-fine,
  title     = {Fine-Grained Perspectives: Modeling Explanations through Annotator-Specific Rationales},
  author    = {Sarumi, Olufunke O. and Welch, Charles and Braun, Daniel},
  booktitle = {Proceedings of the 5th Workshop on Perspectivist Approaches to NLP},
  year      = {2026},
  publisher = {[to be published]},
  doi       = {10.48550/arXiv.2604.21667},
  url       = {https://arxiv.org/abs/2604.21667}
}
Recent work in Natural Language Processing has focused on developing methods to model annotator perspectives within subjective datasets, aiming to capture opinion diversity. This has led to a variety of approaches that learn from disaggregated labels, raising the question of which factors most influence the performance of these models. While dataset characteristics are a critical factor, the choice of evaluation metric is equally crucial, especially given the fluid and evolving concept of perspectivism. A model considered state-of-the-art under one evaluation scheme may not retain its top-tier status when assessed with a different set of metrics, highlighting a tension between reported model performance and the evaluation framework. This paper presents a performance analysis of annotator modeling approaches using the evaluation metrics of the 2025 Learning With Disagreement (LeWiDi) shared task as well as additional metrics. We evaluate five annotator-aware models under the same configurations. Our findings demonstrate a significant metric-induced shift in model rankings. Across four datasets, no single annotator modeling approach consistently outperformed the others under any single metric, revealing that the “best” model is highly dependent on the chosen evaluation metric. This study systematically shows that evaluation metrics are not agnostic in the context of perspectivist model assessment.
@inproceedings{sarumi-etal-2025-nlp,
  title     = {{NLP}-{R}es{T}eam at {L}e{W}i{D}i-2025: Performance Shifts in Perspective Aware Models based on Evaluation Metrics},
  author    = {Sarumi, Olufunke O. and Welch, Charles and Braun, Daniel},
  editor    = {Abercrombie, Gavin and Basile, Valerio and Frenda, Simona and Tonelli, Sara and Dudy, Shiran},
  booktitle = {Proceedings of the 4th Workshop on Perspectivist Approaches to NLP},
  month     = nov,
  year      = {2025},
  address   = {Suzhou, China},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2025.nlperspectives-1.19/},
  doi       = {10.18653/v1/2025.nlperspectives-1.19},
  pages     = {219--227},
  isbn      = {979-8-89176-350-0}
}
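The metric-induced ranking shift described in the abstract above can be illustrated with a toy example (all numbers are hypothetical, not the shared-task results): two models can both be "correct" under a hard majority-vote metric while differing sharply under a soft metric that scores the full label distribution.

```python
# Illustrative sketch: the same predictions ranked by two different metrics.
import math

# One item, three hypothetical annotators: 2 say "hate", 1 says "not_hate".
soft_target = {"hate": 2 / 3, "not_hate": 1 / 3}
majority = "hate"

# Model A is very confident in the majority class; Model B instead
# mirrors the human label distribution.
model_a = {"hate": 0.99, "not_hate": 0.01}
model_b = {"hate": 0.66, "not_hate": 0.34}

def hard_correct(pred):
    """Hard metric: does the argmax match the majority label?"""
    return max(pred, key=pred.get) == majority

def cross_entropy(target, pred):
    """Soft metric: cross-entropy against the annotator distribution."""
    return -sum(p * math.log(pred[c]) for c, p in target.items())

# Both models score identically under the hard metric, but Model B has
# a lower (better) cross-entropy because it preserves the disagreement,
# so the two metrics rank the models differently.
ce_a = cross_entropy(soft_target, model_a)
ce_b = cross_entropy(soft_target, model_b)
```

This is the core of why "best" is metric-dependent: a leaderboard built on the hard metric sees a tie, while one built on the soft metric prefers Model B.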
Olufunke O. Sarumi, Charles Welch, Daniel Braun, and 1 more author. In Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP-2025). Association for Computational Linguistics, 2025.
In this work, we explore the capability of Large Language Models (LLMs) to annotate hate speech and abusiveness while considering predefined annotator personas along the strong-to-weak data perspectivism spectrum. We evaluated LLM-generated annotations against existing annotator modeling techniques for perspective modeling. Our findings show that LLMs use demographic attributes from the personas selectively. We identified prototypical annotators whose persona features show varying degrees of alignment with the original human annotators. Within the data perspectivism paradigm, annotator modeling techniques that do not explicitly rely on annotator information performed better under weak data perspectivism than under both strong data perspectivism and human annotations, suggesting that LLM-generated views tend towards aggregation despite subjective prompting. However, for more personalized datasets tailored to strong perspectivism, the performance of LLM annotator modeling approached, but did not exceed, that of human annotators.
@inproceedings{sarumi-etal-2025-impact,
  title     = {The Impact of Annotator Personas on {LLM} Behavior Across the Perspectivism Spectrum},
  author    = {Sarumi, Olufunke O. and Welch, Charles and Braun, Daniel and Schl{\"o}tterer, J{\"o}rg},
  editor    = {Abbas, Mourad and Yousef, Tariq and Galke, Lukas},
  booktitle = {Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP-2025)},
  month     = aug,
  year      = {2025},
  address   = {Southern Denmark University, Odense, Denmark},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2025.icnlsp-1.14/},
  pages     = {121--136},
  doi       = {10.48550/arXiv.2508.17164}
}
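A minimal sketch of what persona-based prompting looks like in practice (the prompt wording, attribute names, and the `query_llm` call are hypothetical, not the paper's exact setup): demographic attributes of an annotator persona are serialized into the instruction before the text to be labeled.

```python
# Hypothetical sketch of persona-conditioned annotation prompting.

def build_persona_prompt(persona: dict, text: str) -> str:
    """Compose an annotation prompt conditioned on an annotator persona."""
    persona_desc = ", ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        f"You are an annotator with the following profile: {persona_desc}.\n"
        f"Label the following text as 'abusive' or 'not abusive'.\n"
        f"Text: {text}\n"
        f"Label:"
    )

# Hypothetical persona attributes; real studies draw these from
# annotator metadata in the source dataset.
persona = {"age": "25-34", "gender": "female", "education": "graduate"}
prompt = build_persona_prompt(persona, "Example post to annotate.")
# The prompt would then be sent to an LLM (e.g. a `query_llm(prompt)`
# wrapper) and the returned label compared against the human annotator's.
```

Comparing such LLM-generated labels per persona against the matching human annotator's labels is one way to measure the alignment discussed above.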