
Breaking the Code: Feminist Perspectives on Addressing Gender Bias, Discrimination, and Sexualization in Generative AI

Lucrezia Sala


At the heart of the digital age, Artificial Intelligence stands as a shining beacon of innovation and possibility, challenging the limits of human knowledge. Nonetheless, artificial intelligence is a double-edged sword: on the one hand, it promises extraordinary progress; on the other, it forces us to confront unprecedented risks and ethical dilemmas. One of its many applications has recently been in the spotlight: Generative AI. In this article, I offer an in-depth analysis of the risks of Generative AI and explore solutions grounded in feminist theory to address challenges related to gender equality and discrimination in Artificial Intelligence.

 

What is Generative AI?

Through algorithmic creativity, Generative AI can produce new, seemingly original, and unique material, such as photos, music, or writing, in response to user commands or requests (Generative AI: The Big Questions, n.d.).

One of the best-known examples of generative AI systems is the GPT (Generative Pre-trained Transformer) series, which uses a large language model (LLM) (Baxter & Schlesinger, 2023) to comprehend text prompts and, in a tool like ChatGPT, produce natural language text in response. These models are typically trained on enormous volumes of data, allowing for greater complexity and more coherent, context-sensitive replies.

 

The Risks of Generative AI

At the AI for Good Global Summit in July 2023, António Guterres, Secretary-General of the United Nations, said that "AI charts a course that benefits humanity and boosts our shared values" (United Nations, 2023). However, both the accuracy and the ethical use of Generative Artificial Intelligence pose significant problems. The first and foremost concern preoccupying feminists around the globe is that, as of now, Generative AI models can reinforce the gender biases and stereotypes already present in the data they are trained on. For instance, chatbots like Microsoft's Tay (Vincent, 2016) and Facebook's BlenderBot (Robinson, 2022) have been known to produce disrespectful, sexist, and racist content after picking up on negative online interactions. Similarly, Stable Diffusion produced images of men when prompted to generate images of a "doctor", while a prompt for "nurse" generated images of women (Tan, 2023).

Systems like ChatGPT are problematic, according to Melanie Mitchell, an artificial intelligence researcher at the Santa Fe Institute, because they are "making massive statistical associations among words and phrases. When they start generating new language, they rely on those associations to generate the language, which itself can be biased in racist, sexist and other ways" (Alba, 2022). A remarkable body of work on ethical AI practices has already been published by researchers such as Timnit Gebru and Abeba Birhane; Birhane, together with Vinay Prabhu, discovered more than 1,750 photographs labeled with the n-word in the 80 Million Tiny Images dataset (Prabhu & Birhane, 2020). "In accepting large amounts of web text as 'representative' of 'all' of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality," they wrote. According to The Register (Quach, 2020), the dataset labeled Black and Asian people with racist slurs, labeled women holding children as whores, and included pornographic images.
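Mitchell's point about statistical associations can be made concrete with a toy sketch. This is purely my illustration, not code from any of the systems cited: the tiny corpus and the `pronoun_counts` helper are invented for the example. Even a simple co-occurrence count over a skewed corpus will "learn" that doctors are "he" and nurses are "she", which is exactly the kind of association a generative model trained on biased web text then reproduces at scale.

```python
from collections import Counter

# A deliberately skewed toy corpus, standing in for biased web-scale training data.
corpus = [
    "the doctor said he would operate",
    "the doctor said he was busy",
    "the nurse said she would help",
    "the nurse said she was kind",
    "the doctor said she would operate",
]

def pronoun_counts(corpus, profession):
    """Count which word follows '<profession> said' across the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 2):
            if words[i] == profession and words[i + 1] == "said":
                counts[words[i + 2]] += 1
    return counts

print(pronoun_counts(corpus, "doctor"))  # skews toward "he"
print(pronoun_counts(corpus, "nurse"))   # skews toward "she"
```

A model that samples its next word from these counts would usually continue "the doctor said" with "he", not because doctors are men, but because the data says so more often. Curating more balanced training data shifts the counts, and therefore the output.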
Another major concern about generative AI is its capacity to spread disinformation through deepfakes, which, given how easily visual content can deceive us, is especially worrying.

 

Solutions proposed through the lens of Feminism

Can AI become a driver of equity, justice, and diversity through feminism?

"For an inclusive digital world: innovation and technology for gender equality" was the 2023 International Women’s Day theme. The idea behind it was that we now have new methods and resources to put men and women on an even playing field in the workplace, society, and public spaces thanks to digital technology and innovation, offering women greater freedom and safer access to networks, new opportunities, and a more sustainable development for all. Similarly, the UNDP’s “Digital Imaginings: Women’s CampAIgn for Equality” employed AI-generated art to portray a world where women have more opportunities and power.

According to Eva Gengler, a researcher and entrepreneur in the field of feminist artificial intelligence, co-founder of enableYou, and co-founder of FemAI, if we combine the transformative power of feminism and the revolutionary force of AI, nothing could stop us from eliminating injustice (Gengler, n.d.). In her talk, she argues that we, as humans, do not like to admit to having biases and have a hard time recognizing them in ourselves, whereas we are much better at detecting them in AI models' outputs. She therefore suggests we use AI as a mirror: to actually see our own biases and unlearn them rather than let AI exacerbate existing hardships, and to empower marginalized people, report hate speech, recruit based on potential, and bring education to ethnic minorities.

So, ultimately, she believes we are at a turning point: AI can be the tool to intensify justice and push the analog world towards greater equity, and it may also be the feminist tool to challenge and change power imbalances in both the digital and the analog world. She calls this Feminist AI: a goal, a value construct, and an approach to reach more equity with and within AI. As mentioned above, Eva Gengler is also a co-founder of FemAI, a think tank on Feminist Artificial Intelligence that seeks to harness AI's potential to create equality and a better life for all. It focuses particularly on the role of women, LGBTIQ+ people, and other marginalized and underrepresented groups, regardless of gender, age, religion and belief, disability, sexual identity, ethnicity, and appearance. FemAI has provided a vision for the EU AI Act[1], ensuring that AI development is gender-responsive, equitable, and just. It calls for several actions, such as classifying General Purpose AI as high-risk, identifying prohibited areas of application, closing loopholes for high-risk AI, prioritizing inclusive data sets, and ensuring transparency and accountability through the AI Office.

In an interview, the Brazilian digital rights and data justice activist Luísa Franco Machado, 23 years old and one of the United Nations' 17 Young Leaders for the Sustainable Development Goals (SDGs), argued that AI has the potential to change the current landscape and promote feminism (How Can Digital Rights Promote Gender Equality?, 2023). Her suggestion is to guarantee that more women and genderqueer people sit in the management of large digital platforms and in the teams developing the code and large language models behind generative AI. Access to technology can improve access to education, particularly for women who live in remote areas, are single mothers, or are the parents of large families with many responsibilities. When more women are involved in developing technologies, content, services, and applications are created by women, for women. This makes it easier to communicate ideas that go beyond stereotypes, such as information on gender and obstetric violence, among other things. It also helps overcome the sexism that many algorithms and codebases reproduce.

 

Conclusions

"We Should All Be Feminists" is not only what Chimamanda Ngozi Adichie proclaimed at TEDxEuston in December 2012 (Adichie, 2012) but a belief that rings true for people of all genders, sexual preferences, and ethnic identities, as true feminism is intersectional. It was feminist activism, along with many unnamed colleagues and friends who may or may not have considered themselves feminists but who provided community and support to women, that brought women's issues to the forefront of political agendas. In light of the potential of generative AI, the call is clear: AI can be a force for good, inclusivity, justice, and equality, as Generative AI not only generates content but could also create a better, brighter future for everyone. It just needs to become a feminist.

 

 

 

Bibliography

Generative AI: The big questions. (n.d.). Clifford Chance. Retrieved December 1, 2023, from https://www.cliffordchance.com/insights/thought_leadership/ai-and-tech/generative-ai-the-big-questions.html

Baxter, K., & Schlesinger, Y. (2023, June 6). Managing the Risks of Generative AI. Harvard Business Review. https://hbr.org/2023/06/managing-the-risks-of-generative-ai

United Nations. (2023). Urging creation of guardrails, Secretary-General tells AI for Good Summit artificial intelligence must benefit everyone, be used to drive sustainable development. UN Press. https://press.un.org/en/2023/sgsm21864.doc.htm

Vincent, J. (2016, March 24). Twitter taught Microsoft's AI chatbot to be a racist in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Robinson, B. (2022). Facebook's AI chatbot turned racist extremely quickly. indy100. https://www.indy100.com/viral/facebook-ai-chatbot-racism

Tan, J. (2023). Singapore releases discussion paper on the risks associated with Generative Artificial Intelligence. Dentons.rodyk.com. https://dentons.rodyk.com/en/insights/alerts/2023/july/6/singapore-releases-discussion-paper-on-the-risks-associated-with-generative-artificial-intelligence

Alba, D. (2022, December 8). OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails. Bloomberg.com. https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results

Prabhu, V., & Birhane, A. (2020). Large datasets: A pyrrhic win for computer vision? https://arxiv.org/pdf/2006.16923.pdf

Quach, K. (2020). MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. The Register. https://www.theregister.com/2020/07/01/mit_dataset_removed/

Gengler, E. (n.d.). Feminism - For more equity in AI | Eva Gengler | TEDxCBS Cologne [Video]. YouTube. https://www.youtube.com/watch?v=CxcCwvut50A

Adichie, C. N. (2012, December). We should all be feminists [Video]. TED. https://www.ted.com/talks/chimamanda_ngozi_adichie_we_should_all_be_feminists


 


©2021 Bocconi-students International Law Society.
