Media and technology scholar Belinda Amartey has warned that the growing use of feminine voices and personalities in public-facing artificial intelligence systems risks entrenching harmful gender stereotypes.
She said the trend of feminizing AI in virtual assistants, automated customer service platforms, and chatbots reinforces long-standing cultural associations between women and care, obedience, and emotional labor.
According to her, many AI systems are intentionally built to sound friendly, patient, and accommodating. "These systems are presented as friendly, patient, and endlessly accommodating," Amartey said. "But those traits are not neutral. They reflect cultural ideas about femininity and care historically attached to women's labor."
She warned that repeated interaction with compliant, feminized systems can normalize unequal power relations and reinforce expectations that emotional responsiveness should be freely available.
Amartey noted that the implications are significant in sectors such as banking, healthcare, and public services, where AI is increasingly used to manage complaints and sensitive interactions.
“In these spaces, AI does more than provide information,” she said. “It manages emotion and de-escalates tension. That mirrors work long feminized and undervalued.”
She also cautioned that feminized AI may increase user trust, encouraging people to share personal data without fully understanding how it is collected or used.
Amartey, a doctoral researcher at the School of Arts, Technology, and Emerging Communication at the University of Texas at Dallas, called for greater scrutiny of gendered design choices in AI.
“Ethical AI requires more than correcting data bias,” she said. “We must question the social values being built into these systems.”

