
Technology, Gender and Intersectionality Research Project

Nothing in a face © Suture Blue

The University of Cambridge Centre for Gender Studies, in collaboration with the Leverhulme Centre for the Future of Intelligence (CFI), is running a two-year project on Technology, Gender and Intersectionality. The project is headed by Professor Jude Browne (PI) and Dr Stephen Cave and includes the Christina Gaw Post-doctoral Research Associates, Dr Eleanor Drage and Kerry Mackereth. Together they bring a feminist, intersectional and anti-racist perspective to artificial intelligence (AI).

The Technology, Gender and Intersectionality Research Project, generously funded by Christina Gaw, works to develop innovative frameworks and methodologies that support better, fairer and more equitable AI. It aims to bridge the gap between theoretical research on AI’s gendered impact and technological innovation in the AI sector.

The project brings these two spheres together through collaborative relationships with industry partners that are actively working to mitigate and eradicate unfair bias in their AI products and to create diverse and equitable workplace cultures. In this sense, the Technology, Gender and Intersectionality project is a cutting-edge collaboration between academia and industry premised on the mutual exchange and development of ideas, knowledge and products. The project translates scholarship into practical knowledge for industry, while also allowing industry approaches to inform academic work in the field of Gender Studies. It ultimately aims to provide the AI sector with practical tools, informed by intersectional feminist knowledge, for creating more equitable AI.

Our Focus on AI

As AI becomes increasingly prevalent in society, there is a clear need to analyse the challenges it poses and how it may differentially affect individuals along the lines of social and political factors including (but not limited to) gender, race, class, age and ability. AI is often perceived as a neutral, unbiased tool that makes fairer and more equitable decisions than human beings. Yet AI can replicate and intensify the political and sociocultural conditions and power relations within which it is embedded, so attempts to use AI to address inequality may instead exacerbate the very inequality they are meant to solve. The project considers how AI may entrench or accentuate existing inequalities, as well as who makes AI and how the demographics of the AI workforce may affect the design and output of AI technologies.
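One way to see how a system can replicate the conditions it is embedded in is through proxy variables. The following is a minimal, purely illustrative sketch using entirely synthetic data (no real system, dataset or project method is implied): a classifier is trained on historically biased decisions with the protected attribute withheld, yet it reproduces the disparity through a correlated proxy feature.

```python
# Purely illustrative sketch with synthetic data: a model trained on
# historically biased decisions can reproduce that bias even when the
# protected attribute is withheld, because a correlated proxy leaks it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                             # hypothetical protected attribute
proxy = np.where(rng.random(n) < 0.8, group, 1 - group)   # e.g. a neighbourhood code, 80% correlated
skill = rng.normal(0.0, 1.0, n)                           # a genuinely task-relevant feature

# Historical labels encode bias: group 1 was approved less often at equal skill.
p_approve = 1.0 / (1.0 + np.exp(-(skill - 1.0 * group)))
label = (rng.random(n) < p_approve).astype(int)

# Train WITHOUT the protected attribute: features are skill and the proxy only.
features = np.column_stack([skill, proxy])
model = LogisticRegression().fit(features, label)
pred = model.predict(features)

# The learned model still approves the two groups at very different rates.
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

Omitting a protected attribute does not remove bias when other features leak group membership, which is one reason attempts to "neutralise" AI by ignoring identity categories can fail.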

The Technology, Gender and Intersectionality project also considers the ability of AI to help address human biases and intersecting forms of social harm. The past five years have seen a number of high-profile failures of AI to perform tasks in a fair and unbiased way. In these cases, AI has exacerbated and accentuated human biases, drawing attention to the way in which social prejudices and assumptions are embedded in technology. In order to create more equitable outcomes, the processes that result in unfairly biased outputs must first be made visible. The project hopes to assist industry partners in drawing out and benefitting from AI’s immense potential while mitigating its potential harms.

AI Through an Intersectional Lens

The Technology, Gender and Intersectionality project enriches existing scholarship on the social and political impact of AI through its intersectional lens. Intersectionality, a concept that emanated from Black feminist thought and is now widely employed across a wide range of contexts, insists that forms of discrimination and oppression can only be understood in relation to one another. In intersectional thought, categories such as gender, race, and class (among many others) are not experienced as discrete entities; instead, these axes of power work through and alongside one another. Crucially, intersectionality illuminates how intersecting forms of domination produce harms that are not merely additive, but constitute more than the sum of their parts.
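In the context of AI, one practical consequence of this insight is that auditing a system along a single axis at a time can hide harms that only appear at the intersections. The sketch below is purely illustrative, using invented error rates and crude binary codings of gender and race (a simplification the project itself problematises), and does not represent any real system or the project’s own methods:

```python
# Purely illustrative sketch with invented numbers: a subgroup harm that
# single-axis audits understate is exposed by a disaggregated,
# intersectional audit. Binary codings here are a crude simplification.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
gender = rng.integers(0, 2, n)  # hypothetical binary coding
race = rng.integers(0, 2, n)    # hypothetical binary coding

# Invented per-subgroup error rates: only one intersection is severely affected.
error_rate = {(0, 0): 0.05, (0, 1): 0.08, (1, 0): 0.08, (1, 1): 0.35}
p_err = np.array([error_rate[(g, r)] for g, r in zip(gender, race)])
error = rng.random(n) < p_err

# Single-axis audits report moderate averages...
print(f"gender=1 overall error: {error[gender == 1].mean():.2f}")
print(f"race=1 overall error:   {error[race == 1].mean():.2f}")

# ...while disaggregating by both attributes at once reveals the harm.
for g in (0, 1):
    for r in (0, 1):
        mask = (gender == g) & (race == r)
        print(f"gender={g}, race={r}: error {error[mask].mean():.2f}")
```

Here the most-affected subgroup’s error rate far exceeds what either single-axis figure would suggest on its own: the "more than the sum of their parts" point in quantitative miniature.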

By bringing an intersectional lens to the field of AI, the Technology, Gender and Intersectionality team illuminates how the gendered impact of AI must be considered alongside other overlapping and intersecting forms of harm. The project uses this lens to demonstrate more fully the complexity of how bias is embedded and reproduced in AI systems. For example, the research team investigates how gender and race interact in AI processes and systems, and how developers and users of AI technologies conceive of gender and race. They consider how AI practitioners grapple with complex intersectional identity categories, and how these identities are coded into AI technologies and algorithms. The team uses gender and critical race theory to bring new insights to these questions of categorisation and quantification in the realm of AI.

The project team is a multidisciplinary research collective that brings different theoretical perspectives to bear on issues relating to gender and AI. Eleanor Drage specialises in contemporary feminist, anti-racist, posthumanist and queer theory and their practical relevance to the interrogation and amelioration of technical systems. Kerry Mackereth examines histories of gendered and racialised violence and considers how contemporary AI may reproduce or legitimise these histories. Together, the research team draws on intersectional gender studies to advance our understanding of the relationship between gender, race, and new and emerging technologies.

Upcoming events

Artificial Intelligence and Unfair Bias: Addressing Gendered and Racialised Inequalities in AI, Cambridge Festival for the University of Cambridge, Monday 29 March 2021

In March 2021, the UCCGS Post-doctoral Researchers, Dr Eleanor Drage and Kerry Mackereth, will run a workshop at the Cambridge Festival, the University of Cambridge event that replaces the Cambridge Festival of Ideas and the Cambridge Science Festival. The workshop will open with a 20-minute presentation about unfair bias and AI, after which participants will be invited to contribute their ideas and feedback on bias in AI systems and the potential of AI to support equality initiatives. It will break down some of the key ethical issues surrounding artificial intelligence, gender, race, and bias, and identify some of the conditions and practices that lead to the development of biased AI, such as the demographics of the AI workforce and the paucity of tangible measures used to address bias in AI production processes. It will show how biased AI results in real-world harms through the examples of facial recognition and search algorithms. The workshop will then examine steps forward for policies that the AI sector could implement to address bias and work towards ethical, human-focused AI produced for social good. Finally, it will consider whether artificial intelligence can help us address racist and sexist biases, make people more aware of these biases, and perhaps even complicate systems of race and gender.

Thank you to Suture Blue for their generosity in allowing us to display their wonderful images on our website.