Different Explanations

Determining the requirements for explainability for different stakeholders in Social Policy

Project Team

Hugh Shanahan
Department of Computer Science, Royal Holloway, University of London
Contact: Hugh.Shanahan@rhul.ac.uk

Project timeframe

Start: September 2019
End: November 2019

Co-Investigators

David Denney (Royal Holloway, University of London)

Supporting Partner(s) 

Jun Zhao, Menisha Patel, University of Oxford, Entrust Project

Making decisions more transparent

Machine Learning (ML) and Artificial Intelligence (AI) methods are increasingly used to make algorithmic decisions about our lives. However, no one fully understands how these decisions are made: the process is opaque, even to the people who developed the applications that make them. This is because the parameters in the underlying models are determined by data that may contain hidden biases (a minimal sketch of this appears after the list below).

This can lead to injustice in two ways:

  • It is difficult to appeal against such decisions: an organisation in power may reject an appeal based on human analysis, on the grounds that it is inferior to a decision made by a machine.
  • It is difficult to find out who or what is at fault when things go wrong, because the systems cannot be reviewed and analysed, so the processes cannot be improved.
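
To make the bias claim concrete, here is a minimal, hypothetical sketch of how a model can inherit bias from its training data. The postcodes, outcomes and decision rule below are all invented for illustration; real systems are far more complex, but the mechanism is the same: the model's parameters simply reflect past decisions, biased or not.

```python
# A minimal sketch (hypothetical data) of how hidden bias in historical
# decisions leaks into a model's parameters. The "model" here is just
# the approval rate per postcode, learned from past outcomes.

# Hypothetical historical decisions: (postcode, was_approved)
history = [
    ("AB1", 1), ("AB1", 1), ("AB1", 0),   # area mostly approved in the past
    ("CD2", 0), ("CD2", 0), ("CD2", 1),   # area mostly rejected in the past
]

# "Training": count total decisions and approvals per postcode.
rates = {}
for postcode, approved in history:
    total, yes = rates.get(postcode, (0, 0))
    rates[postcode] = (total + 1, yes + approved)

def predict(postcode):
    """Approve if the historical approval rate for this postcode is at least 50%."""
    total, yes = rates[postcode]
    return yes / total >= 0.5

# A new applicant from CD2 is rejected purely because past decisions there
# (which may themselves have been unfair) were mostly rejections.
print(predict("AB1"))  # True
print(predict("CD2"))  # False
```

Nothing in the code mentions any protected attribute, yet the postcode acts as a proxy for whatever drove the historical pattern: this is precisely why such decisions are hard to appeal against or audit.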

This project aims to …

Produce a set of recommendations for policies that will help to combat bias in algorithms and the lack of understanding of how algorithmic processes work.

Specifically, it will …

Investigate how policies on algorithm behaviour can be formulated to be fair in areas such as Policing and Social Housing.

This project will identify who is affected by this problem and find out what they need, before going on to establish a set of practical criteria for policy-makers based on this. It will not discourage the use of algorithms in decision-making but will encourage an awareness that they are imperfect tools.

A one-day workshop will generate a set of policy-based Use Cases, as well as identifying the key stakeholders, their requirements and their level of expertise. It will also consider what other factors communities feel are important in the interests of fairness.

Two sets of recommendations will be constructed on the basis of the workshop:

  • How to adopt AI/ML algorithms
  • What to take into account when developing AI/ML

This project’s social impact is …

… to ensure that social policies take into account that Machine Learning approaches are powerful and useful but imperfect, and need to come under scrutiny.

Ultimately, the research could lead to more socially just algorithmic decision-making.

Explanation of key terms

AI / Artificial Intelligence: Artificial intelligence (AI) is an area of computer science that emphasises the creation of intelligent machines that work and react like humans, in areas such as voice recognition, language translation, visual perception and decision-making. Examples of AI in day-to-day life include spam filters and smart personal assistants, such as Siri or Alexa.

ML / Machine Learning: Machine learning is an application of artificial intelligence (AI) that gives systems the ability to learn and improve automatically from experience without being explicitly programmed.
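
As a minimal, hypothetical sketch of what "learning from experience" means: instead of a programmer hard-coding a rule, the rule is chosen automatically from labelled examples. The spam scores and labels below are invented for illustration.

```python
# A minimal sketch of machine learning: pick the spam-score threshold
# that best separates labelled examples, rather than hard-coding it.
# All scores and labels are hypothetical.

examples = [(0.1, "ham"), (0.3, "ham"), (0.7, "spam"), (0.9, "spam")]

# Try each observed score as a candidate threshold; keep the one that
# classifies the most examples correctly.
best_threshold, best_correct = 0.0, 0
for threshold in [score for score, _ in examples]:
    correct = sum(
        ("spam" if score >= threshold else "ham") == label
        for score, label in examples
    )
    if correct > best_correct:
        best_threshold, best_correct = threshold, correct

print(best_threshold)  # 0.7 -- chosen from the data, not by a programmer
```

The "learned" threshold changes whenever the examples change, which is exactly why such systems are harder to explain than a fixed, hand-written rule.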

Algorithm: To write a computer program, you have to tell the computer, step by step, exactly what you want it to do. The computer then follows these steps mechanically to 'execute' the programmed instructions and solve a problem or accomplish an end goal. This list of steps is called an algorithm. For example, search engines such as Google and Yahoo each use their own algorithms to rank websites for each keyword or combination of keywords searched by the user.
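
As a minimal illustration (the function and data below are invented for this example), here is an algorithm written out as explicit steps the computer follows mechanically:

```python
# A minimal example of an algorithm: an explicit, step-by-step recipe.
# This one finds the largest number in a list.

def largest(numbers):
    biggest = numbers[0]       # Step 1: start with the first number.
    for n in numbers[1:]:      # Step 2: look at each remaining number.
        if n > biggest:        # Step 3: if it beats the best so far...
            biggest = n        #         ...remember it instead.
    return biggest             # Step 4: report the result.

print(largest([3, 41, 11, 8]))  # prints 41
```

Every step here is fixed in advance by the programmer; by contrast, in a machine learning system the steps stay fixed but the decision rule itself is shaped by the data, as sketched above.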