The ethical value of transparent AI

Researchers are investigating the ethical value of transparent Artificial Intelligence (AI) – ensuring that human societies benefit from increasing automation.

The Edinburgh Futures Institute explores multidisciplinary solutions to the challenges we may face in our data-driven future. One pressing question in current AI research is how AI decision-making systems will affect society. Dr John Zerilli, an Edinburgh Futures Institute Chancellor's Fellow based in the School of Law, argues that we need to make sure AI decision-making systems are transparent and explainable so that they do not violate people's rights.

Transparent and explainable AI

Given the ethical ramifications of the use of AI, it is vital that we understand how AI decision-making systems work.

Using an analytical toolkit that draws on philosophy and the mind sciences, Dr Zerilli investigates the importance of creating explainable AI decision-making systems. He believes that AI decision-making systems should be transparent and understandable to the people they affect.
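
What might such an explanation look like in practice? The sketch below is illustrative only, not Dr Zerilli's work: it uses a simple linear scoring model whose decision can be broken down into per-feature contributions, so the person affected can see which factors drove the outcome. The feature names, weights, and threshold are invented for the example.

# A minimal sketch of an "explainable" decision system: alongside its
# verdict, it reports each input's contribution to the outcome.
# Feature names, weights, and the threshold are illustrative assumptions.

FEATURE_WEIGHTS = {
    "income": 0.4,
    "existing_debt": -0.6,
    "years_at_address": 0.2,
}
BIAS = 0.1
THRESHOLD = 0.0

def decide_with_explanation(applicant: dict) -> tuple[bool, list[str]]:
    """Score an applicant with a linear model and return a per-feature
    breakdown so the affected person can see what drove the decision."""
    contributions = {
        name: weight * applicant[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = BIAS + sum(contributions.values())
    approved = score > THRESHOLD
    # List the largest influences first, signed, so the explanation
    # reads as "what helped" and "what hurt" the application.
    explanation = [
        f"{name}: {value:+.2f}"
        for name, value in sorted(contributions.items(),
                                  key=lambda kv: abs(kv[1]), reverse=True)
    ]
    return approved, explanation

approved, reasons = decide_with_explanation(
    {"income": 1.2, "existing_debt": 2.0, "years_at_address": 0.5}
)
print("approved" if approved else "declined")
for line in reasons:
    print(" ", line)

Inherently interpretable models like this trade some predictive power for the ability to give every affected person a faithful account of the decision, which is precisely the value the research argues for.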

Despite the allure of objectivity and scientific neutrality, AI decision-making systems may contain dangerous biases. Focusing on ethical risks, Dr Zerilli suggests that we should closely examine decision-making systems, especially those that could lead to human mistreatment.

For example, there is a need to examine the AI that decides whether prisoners should receive parole, or the AI that banks use to assess people's creditworthiness. In such cases, there is a high risk of AI systems being unfair and discriminatory, so it is important that ordinary people understand how the systems work and can check that the decisions they produce are fair and correct.
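
The kind of scrutiny called for here can be partly automated. The sketch below is a deliberately simplified audit: it compares approval rates across demographic groups and flags the system when one group's rate falls well below another's. The records and the four-fifths threshold are illustrative assumptions, not a legal standard.

# A minimal sketch of an output audit for group-level disparities in a
# decision system such as a credit or parole model.

from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Compute the approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        approved[record["group"]] += record["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flag(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag the system if any group's approval rate falls below the given
    fraction of the best-treated group's rate."""
    best = max(rates.values())
    return any(rate / best < threshold for rate in rates.values())

# Synthetic records: group A is approved 70% of the time, group B 40%.
decisions = (
    [{"group": "A", "approved": 1}] * 70 + [{"group": "A", "approved": 0}] * 30 +
    [{"group": "B", "approved": 1}] * 40 + [{"group": "B", "approved": 0}] * 60
)
rates = approval_rates(decisions)
print(rates)                         # {'A': 0.7, 'B': 0.4}
print(disparate_impact_flag(rates))  # True: 0.4 / 0.7 is about 0.57, below 0.8

An audit like this does not establish why a disparity arises, only that it exists; that is why the article pairs such examination with the demand for explainability.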

Human dignity

Why is it so important to make AI transparent? Dr Zerilli grounds his answer in human dignity.

"Human beings are animals whose ability to explain themselves to one another is an essential expression of their sociality", says Dr Zerilli.

In other words, explanations matter profoundly to human beings. When AI decision-making processes are well publicised and easy to understand, people can order their affairs in a stable, consistent way. This ensures that everyone is treated fairly, consistently, and according to due process.

With explainable AI, human beings are respected as self-determining centres of action. Dr Zerilli argues: “One of the clearest marks of a person’s proper and respectful treatment is that they are given explanations for actions that affect them”.

The future of explainable AI

According to Dr Zerilli, AI and data innovation have a significant impact on democracy, procedural justice, and due process. We will likely need to modify laws and political systems to ensure that they remain fair and democratic.

Researcher profile

Dr John Zerilli is a philosopher with particular interests in cognitive science, artificial intelligence, and the law. He is the Chancellor’s Fellow (Assistant Professor) in AI, Data, and the Rule of Law at the University of Edinburgh, a Research Associate in the Oxford Institute for Ethics in AI at the University of Oxford, and an Associate Fellow in the Centre for the Future of Intelligence at the University of Cambridge.

Join us to challenge, create, and make change happen.

#ChallengeCreateChange