Technology, Gender and Intersectionality Research Project

The Centre for Gender Studies, in collaboration with the Leverhulme Centre for the Future of Intelligence (CFI), is running a two-year project on Technology, Gender and Intersectionality. The project, headed by Professor Jude Browne and Dr Stephen Cave and carried out by postdoctoral researchers Dr Eleanor Drage and Kerry Mackereth, brings a feminist, intersectional and anti-racist perspective to AI and other emerging technologies. It aims to provide the AI sector with practical tools informed by intersectional feminist knowledge. Working with industry partners, we will develop innovative frameworks and methodologies that support better, fairer, and more equitable AI.

The project team is a multidisciplinary research collective that brings different theoretical perspectives to bear on issues relating to gender and AI. Eleanor Drage specialises in contemporary feminist, anti-racist, posthumanist and queer theory and their practical relevance to thinking about technology. Kerry Mackereth examines histories of gendered and racialised violence and considers how contemporary AI and other emerging technologies may reproduce or legitimise these histories of violence. Together, the Technology, Gender and Intersectionality research team contribute their collective knowledge of how intersectional gender studies advances our understanding of the relationship between technology and humanity. 

The Technology, Gender and Intersectionality research team consider key questions regarding gender, race, power and AI. Their work focuses on the following themes: 

AI, Intersectionality, and Identity 

AI systems illuminate the contested terrain of gendered and racialised identity categories and the encoding of these identities in new and emerging technologies. We consider: How do AI systems conceive of gender and race? How does AI produce and reify race and gender? How do gender and race interact in AI processes and systems? How do developers, practitioners, and users of these technologies conceive of gender and race? How do they grapple with complex gendered and racial identity categories, and how are these identities coded into AI technologies and algorithms? 

AI and the Exacerbation of Human Biases 

In intensifying human bias, AI can make visible the ways in which racist and sexist processes materialise. 

We consider: How and why is AI prone to perpetuate harmful stereotypes around gender and race, and what steps can AI developers take to combat these stereotypes? Conversely, how can AI and other emerging technologies actively combat unfair and unconscious biases by making these presumptions and patterns of power visible? Can AI help people acknowledge and change their biased beliefs and behaviour? Moreover, can AI destabilise and denaturalise ideas about gender and race by automating these characteristics (see Halberstam 1991; Hester 2016)? How can intersectional approaches to gender and race inform efforts to mitigate and/or eradicate unfair bias in AI? 

AI and Inequality 

Although AI is often perceived as a neutral, unbiased tool that makes fairer and more equitable decisions than human beings, it can replicate and intensify the political and sociocultural conditions within which it is embedded. Hence, attempts to use AI to address inequality may exacerbate the very inequality they seek to solve.

We consider: How does AI entrench or accentuate existing inequalities? Who makes AI, and how does this affect the design and output of AI technologies? How does the exclusion of marginalised groups and individuals from the design process and from the datasets used in machine learning result in inequitable systems? Do harmful and unequal outputs emerge from poorly designed AI? Are they generated by the improper application of these tools to scenarios that they are ill-equipped to address? How can these systems take political, social and cultural contexts into account in order to work equitably? Or does AI operate through logics, ideas, and systems that are derived from histories of sexism and racism?

Imagining AI: AI and Representation 

As part of the CFI’s AI Narratives research stream, the Technology, Gender and Intersectionality project examines how the ways in which AI is imagined, narrated and visualised translate into the development and deployment of AI.

We consider: How is AI gendered and raced in popular culture, gaming spaces and commercial products? How is AI embodied in film, advertising, novels and television programmes in ways that reproduce or challenge stereotypes? How do narratives about AI tangibly shape its production, design and reception? How do critical and non-Western depictions of AI creatively and radically re-imagine AI beyond White, Western frameworks? 

References: 

Chinoy, Sahil (2019) ‘The Racist History Behind Facial Recognition’, The New York Times, July 10. Retrieved 27/10/2020. URL: https://www.nytimes.com/2019/07/10/opinion/facial-recognition-race.html

Halberstam, Judith (1991) ‘Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine’, Feminist Studies 17(3): 439-460.

Hester, Helen (2016) ‘Technically Female: Women, Machines, and Hyperemployment’, Salvage Zone, August 8. Retrieved 27/10/2020. URL: https://salvage.zone/in-print/technically-female-women-machines-and-hyperemployment/