
American Journal of Student Research

Comparative Analysis of Gender Bias in Text-Based and Audio-Based NLP Models: Insights from Asian Linguistic and Cultural Contexts

Publication Date: Oct-25-2024

DOI: 10.70251/HYJR2348.23161171


Author(s): Anika Pallapothu


Volume/Issue: Volume 2, Issue 3 (Oct 2024)



Abstract:

This study examines gender biases in Natural Language Processing (NLP) models, focusing on text-based and audio-based systems within Asian linguistic and cultural contexts. It highlights how gender roles and cultural norms in Asian societies influence these biases, using examples such as Google Translate, Siri, and Alexa. The research analyzes datasets that reflect Asian languages and cultural norms, examining how gender roles, stereotypes, and historical patterns manifest in NLP models. Through analysis of word embeddings and model outputs, the study identifies stereotypes that link gender to particular professional traits, especially in text-based models. It also evaluates audio-based NLP models in speech recognition, voice commands, and interpretation, highlighting accuracy issues for speakers whose profiles deviate from the standard demographics represented in the training data. The findings point to the need for strategies that curb these biases and ensure equitable NLP outcomes promoting inclusion and diversity among users. The research is relevant to NLP developers, scholars, and AI teams, as its comparison of text- and audio-based models yields findings that can help reduce bias and promote equity in AI language systems.
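The abstract does not specify how the word-embedding analysis was carried out. A minimal sketch of one common approach is shown below: comparing cosine similarities between profession terms and gendered anchor words in a pre-trained embedding space. The embedding model, word lists, and scoring function here are illustrative assumptions, not the study's actual setup; an analysis of Asian-language data would substitute embeddings and terms for the relevant languages.

```python
# Illustrative sketch only: probing gender associations in word embeddings
# via cosine similarity between profession terms and gendered anchor words.
# The model name and word lists below are assumptions for demonstration.
import gensim.downloader as api
import numpy as np

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(model, word, male_anchors, female_anchors):
    """Mean similarity to male anchors minus mean similarity to female
    anchors; positive values suggest a male-leaning association."""
    male = np.mean([cosine(model[word], model[a]) for a in male_anchors])
    female = np.mean([cosine(model[word], model[a]) for a in female_anchors])
    return male - female

if __name__ == "__main__":
    # Small pre-trained English GloVe vectors, used here only as a stand-in.
    model = api.load("glove-wiki-gigaword-100")
    male_anchors = ["he", "man", "male"]
    female_anchors = ["she", "woman", "female"]
    for profession in ["engineer", "nurse", "doctor", "teacher", "homemaker"]:
        score = gender_association(model, profession, male_anchors, female_anchors)
        print(f"{profession:>10}: {score:+.3f}")
```

Under this kind of probe, a systematic skew of professions toward one set of anchors is taken as evidence of the embedding-level stereotypes the study describes.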