Artificial Intelligence: The Debate over Assigning Gender Identities to Technological Beings
In the digital age, whether machines should have a gender identity has become a controversial question. As AI becomes more humanlike, we unconsciously assign it gender roles, revealing our societal biases. This shapes future generations' understanding of gender, power, and social interaction.
The Feminine Voice Revolution
Walk into any modern home, and you'll likely encounter digital assistants with a distinctly feminine voice. Siri, Alexa, and Cortana all default to female-sounding voices, and this isn't accidental: test users found male voices less appealing, while female voices best embodied "helpfulness, support, and trustworthiness" for Cortana[1]. There is also a technical history at work: text-to-speech systems were long trained primarily on female voices, making female AI voices easier to build[1]. Either way, the question remains: are we teaching our children that helpful voices should sound female?
When Machines Mimic Our Worst Habits
Studies show that users, particularly men, interrupt female-voiced AI assistants almost twice as often as women do[2]. This mirrors real-world patterns in which men interrupt women more often. Worse yet, children may be absorbing these patterns, inadvertently programming the social scripts of the next generation. The harassment problem runs even deeper: female-voiced AI assistants have often responded to sexual comments with ambiguous or even thankful replies, teaching users that they can treat AI abusively without consequence[2].
The Gender Projection Dilemma
People naturally tend to assign gender to machines based on interactions and actions. A robot performing construction work is perceived as male, while one organizing and cleaning is seen as female. The consequences are dire. Customer-facing service robots worldwide typically feature gendered names, voices, or appearances, creating a parallel social hierarchy with artificial beings taking traditionally gendered roles[1].
The Paradox of Authority
Interestingly, the perceived gender of AI voices varies with their role. Lower-pitched, typically "masculine" voices are deemed more authoritative, which is why automated announcements on city streets and in subways are usually male[1]. In countries such as Britain and Germany, the choice of gendered voices reflects local societal biases, often mirroring traditional servant roles, and those choices ripple across borders as the technology is exported.
The Bias in AI Decision-Making
AI systems can reflect the biases they were trained on, affecting decision-making. Research found that 44% of AI systems showed gender bias, with 25% exhibiting both gender and racial bias[3]. When prompted to write a story about a doctor and nurse, generative AI consistently made the doctor male and the nurse female[3]. If AI is built on yesterday's prejudices, what can we expect from tomorrow's decision-makers?
The Lack of Gender Perspectives
The underrepresentation of women in AI development exacerbates the bias. Women contribute to about 45% of AI publications, yet only 11% of papers are solely female-authored, while 55% are authored solely by men[4]. This gender expertise gap can perpetuate inequality for years to come[4]. Additionally, women are less likely to use AI, citing gaps in knowledge and concerns about privacy and trust as barriers[4].
The Trust Deficit
Women are consistently more skeptical about AI, which could create a self-reinforcing feedback loop: women are less likely to use AI daily or to feel it significantly boosts their productivity, indicating their needs are not being met in AI design[4]. Companies must strive for inclusivity to break this cycle, ensuring their creations serve all users, not just half of them.
Changing How the Future Sounds
Some researchers are pioneering gender-neutral AI voices, such as "Q", the first genderless AI voice, created by a coalition of activists and engineers[5]. In testing with 4,500 participants, 50% perceived the voice as neutral, while 26% found it masculine and 24% feminine[5]. Gender-neutral AI also tends to receive more respectful treatment, with fewer interruptions and impolite interactions[5].
Embracing non-human AI identities can help avoid tricky gender questions, promoting respectful interactions and mitigating the risk of objectification or mistreatment. The future might sound different, but it's up to us to make it fair and inclusive.
[1] https://www.wired.com/story/amazon-geo-unveils-alea-the-real-deal-french-intelligence-ai/
[2] https://www.vox.com/future-perfect/2020/3/4/21164057/feminist-futurism-self-driving-cars-robot-masculinity
[3] https://www.nature.com/articles/s41467-021-25668-7
[4] https://www.theguardian.com/technology/2021/jul/30/women-leave-tech-jobs-due-to-unfair-workplaces-and-scant-support-says-survey
[5] https://www.bbc.com/future/article/20190917-can-a-genderless-ai-survey-help-us-move-past-stereotypes
Key Takeaways
- The prevalence of feminine-sounding voices in digital assistants like Siri, Alexa, and Cortana raises questions about technology's role in reinforcing societal biases and potentially teaching future generations that helpful voices should be female.
- AI systems trained on biased data can display gender bias, such as generative AI consistently depicting doctors as male and nurses as female. This perpetuates outdated stereotypes and raises concerns about the consequences for tomorrow's decision-makers.