SemRel2024: A Collection of Semantic Textual Relatedness Datasets for 14 Languages

N Ousidhoum, SH Muhammad, M Abdalla… - arXiv preprint arXiv …, 2024 - arxiv.org
arXiv preprint arXiv:2402.08638, 2024 (arxiv.org)
Exploring and quantifying semantic relatedness is central to representing language. It holds significant implications across various NLP tasks, including offering insights into the capabilities and performance of Large Language Models (LLMs). While earlier NLP research primarily focused on semantic similarity, often within the English language context, we instead investigate the broader phenomenon of semantic relatedness. In this paper, we present SemRel, a new semantic relatedness dataset collection annotated by native speakers across 14 languages: Afrikaans, Algerian Arabic, Amharic, English, Hausa, Hindi, Indonesian, Kinyarwanda, Marathi, Moroccan Arabic, Modern Standard Arabic, Punjabi, Spanish, and Telugu. These languages originate from five distinct language families and are predominantly spoken in Africa and Asia -- regions characterised by a relatively limited availability of NLP resources. Each instance in the SemRel datasets is a sentence pair associated with a score that represents the degree of semantic textual relatedness between the two sentences. The scores are obtained using a comparative annotation framework. We describe the data collection and annotation processes, related challenges when building the datasets, and their impact and utility in NLP. We further report experiments for each language and across the different languages.
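Since each SemRel instance is a sentence pair with a continuous relatedness score, a natural evaluation loop is to embed both sentences, score each pair, and correlate the predictions with the gold labels. The sketch below illustrates such an unsupervised cosine-similarity baseline; the file name, column names, and encoder checkpoint are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: score sentence pairs with a multilingual encoder and compare
# against gold relatedness scores. The file name, column names, and model
# checkpoint below are assumptions for illustration, not from the paper.
import csv

from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util


def load_pairs(path):
    """Read (sentence1, sentence2) pairs and gold scores from a CSV file."""
    pairs, gold = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            pairs.append((row["sentence1"], row["sentence2"]))  # assumed column names
            gold.append(float(row["label"]))                    # assumed column name
    return pairs, gold


def cosine_baseline(pairs, model_name="paraphrase-multilingual-MiniLM-L12-v2"):
    """Unsupervised baseline: cosine similarity between sentence embeddings."""
    model = SentenceTransformer(model_name)
    first = model.encode([a for a, _ in pairs], convert_to_tensor=True)
    second = model.encode([b for _, b in pairs], convert_to_tensor=True)
    # Pairwise cosine similarity; the diagonal aligns sentence i with sentence i.
    return util.cos_sim(first, second).diagonal().tolist()


if __name__ == "__main__":
    pairs, gold = load_pairs("semrel_eng_dev.csv")  # hypothetical file name
    preds = cosine_baseline(pairs)
    rho, _ = spearmanr(preds, gold)
    print(f"Spearman correlation with gold relatedness: {rho:.3f}")
```

Spearman correlation is the customary metric for relatedness and similarity benchmarks because only the ranking of pairs matters, not the absolute scale of the predicted scores.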