Hybrid event

Technomoral Conversations: AI & Creative Labour

A conversation about the role of AI in creative practice and culture.

10 April 2025
6:00pm – 7:30pm BST
Free

The Technomoral Conversations series brings together leaders, creators and innovators from academia, technology, business and the third sector in a ‘fireside chat’ format to discuss futures that are worth wanting. Focusing on AI and creative labour, this Technomoral Conversation will look at issues ranging from the AI industry’s copyright violations, to the responses from creatives in the UK and elsewhere, to the wider ethical and political questions about the role of AI in creative practice and culture. 

The Centre for Technomoral Futures

The Centre for Technomoral Futures focuses on the ethical implications of present and future advances in AI, machine learning and other data-driven technologies. It supports work and research in these areas across the Futures Institute and the University of Edinburgh, and with a wide portfolio of partners, projects and networks.

As part of Edinburgh Futures Institute, the Centre’s shared goal is to help people create and shape more resilient, sustainable and equitable forms of life. The Centre for Technomoral Futures is a home for developing more constructive modes of innovation: innovation that preserves and strengthens human ties and capabilities; that builds more accessible and just paths to public participation in the co-creation of our futures; and that reinvests the power of technology into the repair, maintenance and care of our communities and our planet. 

Bridging Responsible AI Divides (BRAID)

BRAID is a UK-wide programme dedicated to integrating Arts and Humanities research more fully into the Responsible AI ecosystem, and to bridging the divides between academic, industry, policy and regulatory work on responsible AI. It represents a six-year, £15.9 million investment in enabling responsible AI in the UK, funded by the Arts and Humanities Research Council (AHRC) and running from 2022 to 2028. Working in partnership with the Ada Lovelace Institute and the BBC, the BRAID team brings together expertise in human-computer interaction, moral philosophy, arts, design, law, social sciences, journalism, and AI.

Speaker Biographies

Caroline Sinders is an award-winning critical designer, researcher, and artist. They are the founder of the human rights and design lab Convocation Research + Design, and a current BRAID fellow with the University of the Arts London. For the past few years, they have been examining the intersections of artificial intelligence, intersectional justice, harmful design, systems and politics in digital conversational spaces and technology platforms. They have worked with the Tate Exchange at Tate Modern, the United Nations, the UK’s Information Commissioner’s Office, the European Commission, Ars Electronica, the Harvard Kennedy School and others. Caroline is currently based between London, UK and New Orleans, USA.

Dr Paula Westenberger is a Senior Lecturer in Intellectual Property Law and a member of the Centre for Artificial Intelligence at Brunel University of London. She holds a Master’s degree and a PhD in IP law from Queen Mary University of London, and is the Deputy Editor of the European Copyright and Design Reports. Her research interests cover the intersection between copyright law, human rights and culture, with a particular focus on the use of digital technologies (including AI) in the cultural heritage sector. She is a BRAID Research Fellow working on the project ‘Responsible AI for Heritage: copyright and human rights perspectives’ in partnership with RBG Kew.

Richard Combes is the Head of Rights and Licensing and Deputy Chief Executive of the Authors’ Licensing and Collecting Society (ALCS). His work focuses on the development of collective rights and licensing schemes in the UK and internationally, aimed at providing writers with fair remuneration for the re-use of their work. This role involves a significant degree of partnership and collaboration with other UK writers’ organisations and licensing bodies, as well as authors’ societies and collecting agencies around the world.

Richard’s department is also responsible for engaging with UK and EU policy on copyright and authors’ rights – an area of growing prominence on the political agenda – by drafting responses to government consultations, preparing Ministerial briefings and setting the agenda for the All Party Writers’ Group.

Professor Shannon Vallor is Director of the Centre for Technomoral Futures in the Edinburgh Futures Institute and holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence in the University of Edinburgh’s Department of Philosophy. She joined the Futures Institute in 2020 following a career in the United States as a leader in the ethics of emerging technologies, including a post as a visiting AI Ethicist at Google from 2018 to 2020. She is the author of The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024) and Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016). She advises government and industry bodies on responsible AI and data ethics, and is Principal Investigator and Co-Director (with Professor Ewa Luger) of the UKRI research programme BRAID (Bridging Responsible AI Divides), funded by the Arts and Humanities Research Council.

1 Lauriston Place
Edinburgh EH3 9EF, United Kingdom

Join us to challenge, create, and make change happen.

#ChallengeCreateChange