
Gender and the Machine: Care under Conditions of Constraint

In this article

What does care look like when emotions are monitored by algorithms, precarious researchers hold institutions together, and trans communities build survival infrastructures with almost no resources?

Gender and the Machine was co-organised by AI Ethics & Society (AIES) and the Gender + Sexuality Data Lab, with support from the Edinburgh Futures Institute (EFI). The event contributes to a lively ecosystem of EFI-supported activities that explore the crossroads where data meets society, including AIES’s annual Doctoral Colloquia and EFI’s Critical Data Studies cluster.

This blog was first published on the Gender + Sexuality Data Lab website hosted at the University of Edinburgh Business School.


On 20 November 2025, the University of Edinburgh Business School hosted Gender and the Machine: Trans Tech, Emotion and AI, a half-day event organised by the Gender + Sexuality Data Lab and the AI Ethics & Society research group. Bringing together students, researchers, community organisers and members of the public from across the UK, the event explored how contemporary AI systems – and the political economies that underpin them – intersect with gender, identity and emotion.

The programme was structured around three elements: an opening talk by Dr Nazanin Andalibi (University of Michigan) on emotion AI; a fireside panel chaired by Alice Ross (Doctoral Researcher, University of Edinburgh) that foregrounded early-career researchers’ (ECRs) perspectives; and a closing talk by Dr Oliver Haimson (University of Michigan) on trans technologies.

Feeling Rules and Emotion AI

Dr Nazanin Andalibi’s talk, Emotion AI: Utopian Promises, Dystopian Realities, focused on the growing field of ‘emotion AI’ – systems that claim to infer people’s feelings from facial expressions, vocal patterns, text or bodily signals. These tools are increasingly sold as infrastructure for managing workers, consumers and citizens. In recruitment, AI video-interview platforms such as HireVue (until recently used in Unilever’s graduate hiring) have promised to score candidates’ enthusiasm, confidence and ‘emotional intelligence’ by analysing facial movements and tone of voice. Advertising and User Experience (UX) firms like London-based Realeyes offer webcam-based tools that classify viewers’ emotional responses to online content for major brands, optimising campaigns around ‘attention’ and ‘engagement’.

Related biometric analytics are also being trialled by public authorities and large organisations – in transport, retail, education and ‘smart city’ projects in Europe, China and the Gulf – to infer levels of stress, satisfaction or threat in crowds and classrooms. Across these settings, vendors market emotion- and behaviour-recognition as tools for monitoring wellbeing, motivation or risk, promising early detection of burnout, conflict or security problems and, ultimately, more ‘efficient’ and ‘healthy’ workplaces, schools and public spaces.

Andalibi invited the audience to look past the utopian marketing claims that emotion AI can monitor wellbeing, motivation and risk, detect burnout or conflict early and create ‘healthier’ workplaces. Emotion AI, she argued, is never neutral: it builds on particular theories of what emotions are, how they appear on bodies, and which bodies count as legible. In practice, such systems can easily become mechanisms for enforcing normative ‘feeling rules’ or emotional labour at work – expectations of calmness, positivity and availability that have long been unevenly distributed along lines of gender, race, class and disability. Some workers may learn to ‘perform’ the right expressions for the machine, but this demands additional emotional labour in the service of technologies whose scientific foundations are doubtful in the first place.

Illustrating this with survey and interview research, Andalibi showed that many people consider emotion data to be among the most intimate information about them, closer to therapist notes than to ordinary behavioural metrics. Overall, respondents in her studies held negative views of emotion AI, but identity mattered in complex ways: for example, Andalibi and Ingber (2025) found that cisgender women reported less positive attitudes than cisgender men, while people of colour reported more positive attitudes than white participants, and disabled, trans and non-binary respondents were less comfortable with workplace deployments. In a companion study of AI-mediated job interviews, transgender jobseekers perceived significantly lower procedural and distributive justice than cisgender men when emotion-recognition tools were used to evaluate their performance.

For Andalibi, these findings support a simple claim: emotion data should be treated as sensitive by default. Workplace, consumer-protection and data-protection regulators – from U.S. agencies such as the Federal Trade Commission (FTC), the Equal Employment Opportunity Commission (EEOC), state privacy authorities and the U.S. Patent and Trademark Office (USPTO) to EU AI and data-protection institutions and national authorities such as the UK Information Commissioner’s Office (ICO) – should ground their decisions in the experiences of those most uneasy about these systems, rather than in the optimistic promises of their vendors.

Three people stand smiling in front of a screen displaying the words “Panel Discuss.” The background shows a colorful illustration with people, digital devices, and abstract shapes.
Nazanin Andalibi, Alice Ross and Oliver Haimson at Gender and the Machine

Early-Career Perspectives: Doing Gender and AI Research

The fireside panel chaired by Alice Ross (a Doctoral Researcher at the University of Edinburgh) shifted attention from technologies to the people researching them. Ross guided a conversation about what it means to work on gender, data and AI within contemporary universities, where funding streams, institutional priorities and global politics shape what kinds of research are possible.

One clear theme was collaboration. Panellists spoke about the pleasure – and necessity – of thinking, writing and organising together, challenging cultures of competition and the imaginary of the lone researcher who codes, theorises, documents harms, supports affected communities and transforms institutions single-handedly. At the same time, they stressed that collaboration has to be done with care: community partners, practitioners and colleagues in more precarious or marginalised positions are often central to projects but not always recognised as key contributors when papers are written or credit is allocated. Making those contributions visible, and avoiding extractive research on already overburdened communities, were framed as part of the ethical work of doing gender and AI research in a world marked by ongoing state violence and uneven protections – including the genocide and destruction of Palestinian life in Gaza.

Trans Technologies, Agency and Trans Capitalism

In the final talk, drawing on his book Trans Technologies, Dr Oliver Haimson asked what happens if we think of technology, following academic and artist Sandy Stone, as ‘anything that extends our agency’. From that perspective, ‘trans technologies’ are not just specialist apps for trans users but the wide array of tools, infrastructures and practices through which trans people try to gain greater control over their lives. These include both mainstream platforms repurposed for community needs – social media, messaging apps, shared documents, mutual-aid spreadsheets – and tools built specifically around trans concerns, such as resources for navigating name and gender marker changes, safety and mapping apps, and community archives.

Haimson shared the concept of ‘trans capitalism’ to describe the commodification and monetisation of trans identities, from targeted marketing and brand campaigns to venture-backed ‘trans tech’ companies. Investment-funded tools may scale quickly and bring greater trans visibility, but they often offer narrow, market-based solutions that do not address core community needs around healthcare, housing precarity, unemployment and violence. For example, Haimson pointed to the venture-capital-backed company Euphoria, whose Bliss ‘trans banking’ app and related products channelled investment into niche financial services for already-banked trans consumers at the very moment when many trans community members were calling instead for resources to secure basic needs such as gender-affirming care, housing and income support.

Alongside these companies, he highlighted a less visible landscape of under-resourced, community-led trans technologies: mutual-aid infrastructures, protest-finding sites, memorial projects and safety tools sustained by a handful of community organisers working unpaid. Here, success is measured less in revenue than in whether projects keep people alive, redistribute resources and share power horizontally. Rather than dividing the field into ‘good’ grassroots tools and ‘bad’ commercial ones, Haimson encouraged the audience to attend to ambivalence and to the funding models that shape which trans futures can be built and sustained.

A person gestures while presenting in front of a screen displaying water bottles labeled with pronouns: "He/Her," "He/They," "She/They," and "She/He/They." The year "2021-2021" appears on the slide.
Oliver Haimson sharing ideas from his book, Trans Technologies

Reflections and Concluding Thoughts

Across these three strands – emotion AI in the workplace, the labour of ECRs and trans technologies – a common thread was care under conditions of constraint: who is asked to regulate their feelings, confront institutional and structural pressures, or build infrastructures of survival when existing systems fail. Emotion-recognition tools expect workers to perform legible, acceptable emotions for opaque systems; ECRs take on significant affective and ethical labour when documenting AI’s harms; and trans communities develop mutual-aid infrastructures and technologies to make life possible in the shadow of hostile laws and markets.

From a UK perspective, one striking aspect of the discussion was how far practice has already run ahead of public debate. In the retail sector, grocery store chain Southern Co-op has installed Facewatch live facial-recognition cameras in multiple branches, scanning customers entering the store against a private watchlist – a deployment that has prompted legal complaints and regulatory scrutiny. In transport, documents obtained by campaigners show that Network Rail and train operators quietly trialled Amazon’s Rekognition system in several major stations to infer passengers’ age, gender and ‘emotions’, with suggestions that the data could feed into future advertising and crowd-management tools.

Earlier experiments in public spaces in London, such as Plan UK’s Because I’m a Girl adverts at bus stops on Oxford Street, used facial recognition to discern a passer-by’s gender and alter content accordingly – a widely reported example of targeted outdoor advertising that now looks like a precursor to broader demographic and data-driven profiling of people in public space.

Emotion AI also does not sit apart from facial recognition – it rides on the same technical stack. In practice, that stack means the same cameras, the same CCTV feeds and the same sequence of software steps: first the system finds a face, then it lines it up, then it classifies it. In one context, the label might be a name on a watchlist; in another, it might be an age range or an estimated gender; with an extra layer turned on, it becomes an ‘emotion’ score – happy, angry, stressed, engaged or distracted. Once this facial-surveillance stack is in place, adding emotion AI is mostly a matter of switching on another classifier rather than building something entirely new. That is why debates about facial recognition and debates about emotion AI cannot really be separated: they share the same infrastructure and tend to appear in the same high-stakes spaces.
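To make that point concrete, the sketch below shows – in deliberately simplified Python – how such a pipeline is typically organised. Every function, class and label here is a hypothetical placeholder rather than any vendor’s real API; the aim is only to illustrate how an ‘emotion’ head slots into the same detect, align and classify sequence as a watchlist or demographic head.

```python
# Illustrative sketch only. Every function here is a hypothetical placeholder,
# not a real vendor API: the point is that identity, demographic and 'emotion'
# labels are interchangeable heads bolted onto one detect-align-classify stack.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Face:
    aligned_pixels: bytes  # a detected face region after landmark alignment


def detect_and_align(frame: bytes) -> List[Face]:
    """Steps 1-2: find faces in a CCTV frame and line them up (placeholder)."""
    raise NotImplementedError("stand-in for a real detector/aligner")


# Step 3: classification heads. Each maps an aligned face to a label.
Head = Callable[[bytes], str]


def watchlist_match(face: bytes) -> str:
    """Facial recognition: a name from a private watchlist, or 'no match'."""
    raise NotImplementedError


def demographic_estimate(face: bytes) -> str:
    """Biometric analytics: an estimated age range or gender."""
    raise NotImplementedError


def emotion_score(face: bytes) -> str:
    """Emotion AI: 'happy', 'angry', 'stressed', 'engaged' or 'distracted'."""
    raise NotImplementedError


def analyse(frame: bytes, heads: List[Head]) -> List[List[str]]:
    """Run whichever heads are 'switched on' over every detected face."""
    return [[head(face.aligned_pixels) for head in heads]
            for face in detect_and_align(frame)]


# The same infrastructure, with or without emotion inference:
#   analyse(frame, heads=[watchlist_match])                 # facial recognition only
#   analyse(frame, heads=[watchlist_match, emotion_score])  # 'just another classifier'
```

Nothing in the sketch is specific to emotion: the governance question is which heads get switched on, by whom, over which spaces, and with what oversight.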

These deployments therefore sit on top of biometric technologies whose scientific foundations and fairness are themselves contested. At a university like Edinburgh, it is hard not to hear echoes of earlier ‘scientific’ attempts to read character and intellect from faces and skulls: as the recent Decolonised Transformations report shows, late eighteenth- and nineteenth-century professors and students helped to develop racialised physiognomic and craniological hierarchies that ranked human groups by facial form and cranial size, with whiteness treated as the norm.

Emotion-recognition systems largely rely on the claim that there are universal emotions reliably expressed on the face, a view that prominent psychologists and regulators such as the UK ICO have challenged as oversimplified and potentially discriminatory when embedded in automated decision-making. Bias in the underlying facial-analysis pipelines compounds these concerns, with error rates consistently highest for racialised women and other marginalised groups, while disability-focused work has highlighted how many systems misrecognise atypical bodies and expressions, treating disabled people as anomalies. Civil-society organisations have similarly documented how automatic gender and emotion recognition misgender trans and non-binary people and undermine their ability to self-identify, particularly in public spaces and when using transport systems.

At the same time, UK regulation remains largely tech-agnostic. By contrast, the European Union’s AI Act takes a more precautionary, technology-specific approach, prohibiting certain uses of emotion recognition in workplaces and education, while the United States still relies on a patchwork of sectoral privacy and consumer-protection laws and emerging FTC guidance, leaving significant gaps around emotion profiling. The main friction for emotion AI, and for related biometric analytics, in the UK comes indirectly through data protection, equality and employment law, with recent guidance from the Information Commissioner warning against the use of ‘emotional analysis’ for meaningful decisions about people. Recent empirical work with UK adults suggests that public concern has already outpaced policy, with large majorities expressing worry about emotional AI’s potential for manipulation, particularly in social media and children’s toys (so-called ‘emotoys’), and supporting stronger civic protections and regulatory limits.

For me, Gender and the Machine underscored that these developments are not merely technical or ethical questions but political and democratic ones. If retailers, transport operators, advertisers and public bodies are already experimenting with systems that classify, monetise and sometimes punish us on the basis of inferred emotions and biometrics – often without our knowledge, and with disproportionate risks for racialised, trans and disabled people – then bringing these practices into the open, contesting claims of consent, and centring what Kevin Guyan calls ‘box breakers’ – those whose lives do not fit neatly into existing data categories and who are often most uneasy about such systems – are necessary starting points for deciding collectively whether such systems should be permitted at all. In a world marked by global ‘democratic’ backsliding, recentring open, transparent discussion and accountability is vital to securing any return to fuller democracy.

Image Credit: Hanna Barakat & Cambridge Diversity Fund, https://betterimagesofai.org


About the author

Adam Ferron is completing a PhD in Islamic and Middle Eastern Studies at the University of Edinburgh. Their research sits at the intersection of technology and political power.

Further reading

  • Ahmed, Sara. The Cultural Politics of Emotion (2nd edn, Edinburgh University Press, 2014).
  • Andalibi, Nazanin. ‘Emotion AI Will Not Fix the Workplace’, Interactions 32(2), 33–35 (2025), https://doi.org/10.1145/3714419.
  • Andalibi, Nazanin & Alexis Shore Ingber. ‘Public Perceptions About Emotion AI Use Across Contexts in the United States’, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25), ACM, 2025.
  • Bakir, Vian, et al. ‘On manipulation by emotional AI: UK adults’ views and governance implications’, Frontiers in Sociology (2024).
  • Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code (Polity, 2019).
  • Buolamwini, Joy & Timnit Gebru. ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT*, 2018).
  • Catanzariti, Benedetta. ‘Taming Affect: On the Construction of Objectivity in Data Annotation Practices’, Digital Society 4(2), 40 (2025).
  • Couldry, Nick & Ulises A. Mejias. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (Stanford University Press, 2019).
  • Dencik, Lina, et al. Countermeasures: A Right to Reasonable Inferences in the Age of Emotional AI (Ada Lovelace Institute, 2025).
  • Guyan, Kevin. Rainbow Trap: Queer Lives, Classifications and the Dangers of Inclusion (Bloomsbury Academic, 2025).
  • Haimson, Oliver L. Trans Technologies (University of California Press, 2024).
  • Haraway, Donna J. ‘A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century’, in Simians, Cyborgs and Women: The Reinvention of Nature (Routledge, 1991).
  • Katyal, Sonia K. & Jessica Y. Jung. ‘The Gender Panopticon: AI, Gender, and Design Justice’, UCLA Law Review 68, 692–761 (2021).
  • Potvin, Jacqueline. ‘Governing Adolescent Reproduction in the “Developing World”: Biopower and Governmentality in Plan’s “Because I’m a Girl” Campaign’, Feminist Review 122(1), 118–133 (2019).
  • Raha, Nat & Mijke Van Der Drift. Trans Femme Futures: Abolitionist Ethics for Transfeminist Worlds (Pluto Books, 2024).
  • Stone, Allucquère Rosanne ‘Sandy’. The War of Desire and Technology at the Close of the Mechanical Age (MIT Press, 1995).
