Edinburgh Professors to lead multidisciplinary AHRC Programme on AI

Two Edinburgh researchers will lead a project to ensure artificial intelligence (AI) and data are used responsibly and ethically across society and industry.

The Arts and Humanities Research Council (AHRC), part of UK Research and Innovation (UKRI), has appointed Edinburgh University Professors Shannon Vallor and Ewa Luger to direct the £3.5 million programme ‘Enabling a Responsible AI Ecosystem’ in collaboration with the Ada Lovelace Institute.

Professor Shannon Vallor holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence in the School of Philosophy, Psychology and Language Sciences and is Director of the Centre for Technomoral Futures at the Edinburgh Futures Institute.

Professor Ewa Luger holds a personal chair in Human-Data Interaction within the School of Design, is Co-Director of the Institute of Design Informatics, and Director of Research Innovation for Edinburgh College of Art.

They will work closely with the Ada Lovelace Institute, selected by AHRC as a Collaborating Partner for the Programme, to define and shape the research priorities and themes, and deliver other activities to support a responsible AI ecosystem.

A multidisciplinary research programme

By harnessing the expertise of researchers and innovators from a range of disciplines, the three-year AHRC programme will develop a responsible AI ecosystem that is responsive to the needs and challenges faced by policymakers, regulators, technologists, and the public.

The programme will build connections between academia, industry, policy, regulation and the public to help build confidence in AI, enable innovation, stimulate economic growth and deliver wider public benefit.

Professor Shannon Vallor said:

“For AI technologies to be successfully integrated into society in ways that promote shared human flourishing, their development has to be guided by more than technical acumen. A responsible AI ecosystem must meld scientific and technical expertise with the humanistic knowledge, creative vision and practical insights needed to guide AI innovation wisely. This programme will work across disciplines, sectors and publics to lay the foundations of that ecosystem.”

Professor Ewa Luger said:

“We have reached a critical point within responsible AI development. There now exists a foundation of good practice; however, it is rarely connected to the sites where innovation and change happen, such as industry and policy. We hope that this programme will make new connections, creating an ecosystem where responsibility is not the last, but the first thought in AI innovation.”

Enabling a Responsible AI Ecosystem is the first large-scale research programme on AI ethics and regulation in the UK.

Find out more about the Centre for Technomoral Futures
