Technology, Gender and Intersectionality Research Project

Nothing in a face ©Suture Blue

The University of Cambridge Centre for Gender Studies, in collaboration with the Leverhulme Centre for the Future of Intelligence (CFI), is running a two-year project on Technology, Gender and Intersectionality. The project is headed by Professor Jude Browne (PI) and Dr Stephen Cave and includes the Christina Gaw Post-doctoral Research Associates, Dr Eleanor Drage and Kerry Mackereth. Together they bring a feminist, intersectional and anti-racist perspective to artificial intelligence (AI).

The Technology, Gender and Intersectionality Research Project, generously funded by Christina Gaw, works to develop innovative frameworks and methodologies that support better, fairer and more equitable AI. It aims to bridge the gap between theoretical research on AI’s gendered impact and technological innovation in the AI sector.

The project brings these two spheres together through collaborative relationships with industry partners that are actively working to mitigate and eradicate unfair bias in their AI products and to create diverse and equitable workplace cultures. In this sense, the Technology, Gender and Intersectionality project is a cutting-edge collaboration between academia and industry premised on the mutual exchange and development of ideas, knowledge and products. The project translates scholarship into practical knowledge for industry, while also allowing industry approaches to inform academic work in the field of Gender Studies. It ultimately aims to provide the AI sector with practical tools, informed by intersectional feminist knowledge, for creating more equitable AI.

Our Focus on AI

As AI becomes increasingly prevalent in society, there is a clearly demonstrated need to analyse the challenges it poses and how it may differentially affect individuals along the lines of social and political factors including (but not limited to) gender, race, class, age and ability. AI is often perceived as a neutral, unbiased tool that makes fairer and more equitable decisions than human beings. Yet AI can replicate and intensify the political and sociocultural conditions and power relations within which it is embedded. Hence, attempts to use AI to address inequality may instead exacerbate the very inequality they aim to solve. The project considers how AI may entrench or accentuate existing inequalities, as well as who makes AI and how the demographics of the AI workforce may affect the design and output of AI technologies.

The Technology, Gender and Intersectionality project also considers the ability of AI to help address human biases and intersecting forms of social harm. The past five years have seen a number of high-profile failures of AI to perform tasks in a fair and unbiased way. In these cases, AI has exacerbated and accentuated human biases, drawing attention to the way in which social prejudices and assumptions are embedded in technology. In order to create more equitable outcomes, the processes that result in unfairly biased outputs must be made visible. The project hopes to assist industry partners in drawing out and benefitting from AI’s immense potential while simultaneously mitigating AI’s potential harms.
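One concrete way in which such processes can be made visible is to disaggregate a model’s performance by demographic group rather than relying on a single aggregate score. The following is a minimal illustrative sketch, not a description of the project’s own methods: the data and the column names (prediction, label, gender) are entirely hypothetical, and the example simply shows how a respectable-looking overall accuracy can conceal a large gap between groups.

```python
# Illustrative sketch only: disaggregating a classifier's accuracy by group.
# All data and column names (prediction, label, gender) are hypothetical.
import pandas as pd

# Toy audit data: model predictions alongside ground-truth labels and a
# demographic attribute for each individual.
results = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1, 0, 0],
    "label":      [1, 0, 1, 0, 0, 0, 1, 1],
    "gender":     ["m", "m", "m", "m", "f", "f", "f", "f"],
})

# A single aggregate score hides where the errors fall.
overall = (results["prediction"] == results["label"]).mean()
print(f"overall accuracy: {overall:.2f}")  # 0.50

# Disaggregating reveals that the errors are borne unevenly by one group.
by_group = (
    results.assign(correct=results["prediction"] == results["label"])
           .groupby("gender")["correct"]
           .mean()
)
print(by_group)  # f: 0.25, m: 0.75 in this toy example
```

The same disaggregated reporting extends to any demographic attribute or performance metric; the point is simply that unfair bias only becomes visible once results are broken down by group.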

AI Through an Intersectional Lens

The Technology, Gender and Intersectionality project enriches existing scholarship on the social and political impact of AI through its intersectional lens. Intersectionality, a concept that emerged from Black feminist thought and is now widely employed across a variety of contexts, insists that forms of discrimination and oppression can only be understood in relation to one another. In intersectional thought, categories such as gender, race, and class (among many others) are not experienced as discrete entities; instead, these axes of power work through and alongside one another. Crucially, intersectionality illuminates how intersecting forms of domination produce harms that are not just additive, but rather constitute more than the sum of their parts.
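The non-additive character of intersectional harm can be made concrete with a simple numerical sketch. Audits such as Joy Buolamwini and Timnit Gebru’s 2018 ‘Gender Shades’ study found that commercial facial analysis systems performed far worse for darker-skinned women than either the gender gap or the skin-type gap alone would predict. The figures below are entirely hypothetical and illustrate only the arithmetic of that pattern, not any result of this project.

```python
# Illustrative sketch only: why intersectional subgroups must be measured
# directly. All error rates here are hypothetical.
import pandas as pd

errors = pd.DataFrame({
    "gender":     ["man", "man", "woman", "woman"],
    "race":       ["white", "Black", "white", "Black"],
    "error_rate": [0.01, 0.06, 0.07, 0.35],
})

# Marginal error rates, viewed one axis at a time.
print(errors.groupby("gender")["error_rate"].mean())
print(errors.groupby("race")["error_rate"].mean())

# An additive model of harm would predict roughly
# baseline + gender gap + race gap for Black women...
baseline   = 0.01                # white men
gender_gap = 0.07 - baseline     # white women vs white men
race_gap   = 0.06 - baseline     # Black men vs white men
print(f"additive prediction: {baseline + gender_gap + race_gap:.2f}")   # 0.12

# ...but the observed intersectional error rate far exceeds it.
print(f"observed for Black women: {errors['error_rate'].iloc[3]:.2f}")  # 0.35
```

Because the harm at the intersection exceeds the sum of the marginal gaps, a fairness evaluation that checks each attribute separately can certify a system that still fails its most marginalised users.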

By bringing an intersectional lens to the field of AI, the Technology, Gender and Intersectionality team illuminates how the gendered impact of AI must be considered alongside other overlapping and intersecting forms of harm. The project uses this lens to demonstrate more fully how bias is embedded and reproduced in AI systems. For example, the research team investigates how gender and race interact in AI processes and systems, and how developers and users of AI technologies conceive of gender and race. They consider how AI practitioners grapple with complex intersectional identity categories, and how these identities are coded into AI technologies and algorithms. The team uses gender and critical race theory to bring new insights to these questions of categorisation and quantification in the realm of AI.

The project team is a multidisciplinary research collective that brings different theoretical perspectives to bear on issues relating to gender and AI. Eleanor Drage specialises in contemporary feminist, anti-racist, posthumanist and queer theory and their practical relevance to the interrogation and amelioration of technical systems. Kerry Mackereth examines histories of gendered and racialised violence and considers how contemporary AI may reproduce or legitimise these histories of violence. Together, the team draws on intersectional gender studies to advance our understanding of the relationship between gender, race and new and emerging technologies.

Upcoming events

Violence, Reproduction and Racialisation in the History of Medicine, Race and Technology Reading Group, Tuesday 23 February 2021

On February 23rd, Kerry Mackereth will lead a session on Violence, Reproduction and Racialisation in the History of Medicine for the Race and Technology Reading Group. This seminar will explore how the racialised body has historically been a site of invasive and non-consensual experimentation in the name of modern medicine, through a reading of Dorothy Roberts’ Killing the Black Body: Race, Reproduction, and the Meaning of Liberty (1997). It will also highlight how medicalised forms of knowledge have played a central role in producing racial hierarchies and racialised identity categories.

Technology, Gender and Intersectionality, St. John’s College Feminist Society, Thursday 25 February 2021

On February 25th, Kerry Mackereth is giving a talk to the St. John’s College Feminist Society on Technology, Gender and Intersectionality. In this presentation Kerry will give an overview of the project and outline its aims and hypotheses. She will demonstrate the need for an intersectional approach to artificial intelligence, and explore the existing intersectional work being done in the field of AI. She will specifically focus on the ways in which contemporary AI technologies may reproduce histories of gendered and racialised violence, exploring this issue through the example of scientific racism and facial recognition.

International Women’s Day, Monday 8 March 2021

On March 8th, Kerry and Eleanor will be speaking alongside Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence, at an online International Women’s Day event organised by the University of Cambridge Information Services (programme tbc).

Care Work and Technology, Gender and Working Lives Reading Group, Tuesday 9 March 2021

On March 9th, Kerry Mackereth is convening a session of the Gender and Working Lives Reading Group on Care Work and Technology, focusing on Alexa, Alert Me When the Revolution Comes: Gender, Affect, and Labor in the Age of Home-Based Artificial Intelligence by Amy Schiller and John McMahon, read alongside Surrogate Humanity by Neda Atanasoski and Kalindi Vora. The reading group will be discussing the relationship between care labour, gender, race and AI, interrogating technologies that purport to perform caring and reproductive labour on humankind’s behalf. In particular, they will focus on smart home systems like Amazon Alexa and home management software applications like Hello Alfred. The main purpose of the session is to examine how technology is framed as a way of emancipating (white) women from arduous care labour and reproductive labour, and how these technologies simultaneously reproduce gendered and racialised relations of power in technological form.

Science, Imperialism and Indigenous Epistemologies, Race and Technology Reading Group, Tuesday 9 March 2021

On March 9th, Eleanor Drage is leading a session on Science, Imperialism and Indigenous Epistemologies for the Race and Technology Reading Group. This session explores how Mātauranga Māori and other indigenous scientific methods can complement Western science, drawing on case studies from New Zealand, India, and Africa.

Technology, Gender and Intersectionality, University of Edinburgh School of Languages, Literatures and Cultures, Thursday 18 March 2021

On March 18th, Eleanor Drage and Kerry Mackereth have been invited to give a talk at the University of Edinburgh School of Languages, Literatures and Cultures, titled ‘Technology, Gender and Intersectionality’. Eleanor and Kerry will present the project and its methods to students and faculty members before opening a conversation on the role of intersectionality and gender studies in creating more equitable technological systems.

Artificial Intelligence and Unfair Bias: Addressing Gendered and Racialised Inequalities in AI, Cambridge Festival for the University of Cambridge, Monday 29 March 2021

On March 29th, the UCCGS Post-doctoral Researchers, Dr Eleanor Drage and Kerry Mackereth, will be running a workshop at the Cambridge Festival for the University of Cambridge, which replaces the Cambridge Festival of Ideas and the Cambridge Science Festival. The workshop will consist of a 20-minute presentation about unfair bias and AI, after which participants will be invited to contribute their ideas and feedback on bias in AI systems and the potential of AI to support equality initiatives. It will aim to break down some of the key ethical issues surrounding artificial intelligence, gender, race, and bias, and to identify some of the conditions and practices that lead to the development of biased AI, such as the demographics of the AI workforce and the paucity of tangible measures used to address bias in AI production processes. It will show how biased AI results in real-world harms through the examples of facial recognition and search algorithms. The workshop will then examine steps the AI sector could take to address bias in AI and work towards ethical, human-focused AI produced for social good. Finally, the workshop will consider whether artificial intelligence can help us address racist and sexist biases, make people more aware of these biases, and perhaps even complicate systems of race and gender.

Thank you to Suture Blue for their generosity in allowing us to display their wonderful images on our website.