Social Biases in AI-Generated Creative Texts: A Mixed-Methods Approach in the Spanish Context
Blog Article
This study addresses the biases that arise when artificial intelligence (AI) generates creative content, a growing challenge given the widespread adoption of these technologies for automated narrative creation. Biases in AI reflect and amplify social inequalities: they perpetuate stereotypes and limit diverse representation in the generated outputs. Through an experimental approach with ChatGPT-4, biases related to age, gender, sexual orientation, ethnicity, religion, physical appearance, and socio-economic status are analyzed in AI-generated stories about successful individuals in the context of Spain. The results reveal an overrepresentation of young, heterosexual, and Hispanic characters, alongside a marked underrepresentation of diverse groups such as older individuals, ethnic minorities, and characters with varied socio-economic backgrounds.
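The quantitative side of such a mixed-methods analysis typically reduces to tallying how often each demographic category appears among generated protagonists. The sketch below is a minimal, hypothetical illustration of that step; the records, category labels, and the `representation` helper are assumptions for demonstration, not the study's actual data or code.

```python
from collections import Counter

# Hypothetical annotation records: one dict per generated story's
# protagonist, coded by human raters. Labels are illustrative only.
annotations = [
    {"age": "young", "ethnicity": "hispanic", "orientation": "heterosexual"},
    {"age": "young", "ethnicity": "hispanic", "orientation": "heterosexual"},
    {"age": "young", "ethnicity": "white", "orientation": "heterosexual"},
    {"age": "older", "ethnicity": "hispanic", "orientation": "homosexual"},
]

def representation(records, attribute):
    """Return each category's share of protagonists for one attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

print(representation(annotations, "age"))
# With the toy data above: {'young': 0.75, 'older': 0.25}
```

Comparing these shares against population baselines (e.g. census figures for Spain) is what turns raw counts into evidence of over- or underrepresentation.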
These findings support the hypothesis that AI systems replicate and amplify the biases present in their training data, a process that reinforces social inequalities. To mitigate these effects, the study proposes measures such as diversifying training datasets and conducting regular ethical audits, with the aim of fostering more inclusive AI systems. These measures seek to ensure that AI technologies fairly represent human diversity and contribute to a more equitable society.