How can we democratically govern algorithms for more socially-responsible public services?
Lecturer in Human Geography, School of Environmental Sciences, University of East Anglia
Jason Chilvers, Reader in Environmental Social Sciences, School of Environmental Sciences, University of East Anglia
Dr Catherine Price, Researcher
Involve (led by Simon Burall)
Start: September 2019
End: April 2020
Algorithms that deepen discrimination in public services
Algorithms are increasingly being used in the delivery of public services. They can improve efficiency, free staff from menial tasks, reduce the risk of human error, and allow greater personalisation of services. However, their adoption across a wide range of applications has also had negative consequences.
Algorithms raise specific social justice issues that are particularly acute in the domain of public services. For example, because they learn from existing datasets, they can repeat and even amplify sexist and racist tropes. Moreover, the vast amounts of data needed to train algorithms are often obtained and combined in ways of which the people providing the data are unaware.
US research has shown that the costs of adopting algorithms in public services fall disproportionately on already marginalised communities. For example, in welfare and taxation systems, algorithms have been used to discourage the lowest-income groups from claiming welfare payments. A piece of software used in the Florida justice system to predict the likelihood of reoffending and make decisions on early release was more likely to recommend black inmates for lengthened sentences.
The adoption of algorithms in public services such as policing, education, healthcare and immigration has raised further concerns around surveillance, the growing influence of private companies, and the compulsory collection of biometric data.
Appropriate regulation and oversight of the use of algorithms in public services are therefore needed. Algorithms should be adopted to address genuine needs and problems, and the datasets from which important decisions are made should be contestable, transparent and accountable.
This project aims to …
… develop a practical response to pressing social justice issues presented by the digital economy, by researching the use of algorithms in public services in the UK and providing a roadmap for the responsible and democratic governance of these technologies.
It aims to provide the most comprehensive picture yet of citizen responses to the ways in which algorithms are being adopted in and around UK public services, from welfare payments to policing, healthcare and immigration. It will map the different ways in which citizens are engaging with them and identify forms of public engagement that are not taken into account when decisions on policy are being made, such as public protests or community-led initiatives.
Specifically, it will …
… co-design and provide a roadmap towards an observatory for algorithms and society, in order to improve democratic oversight and socially responsible development of algorithms in public services.
The project will:
Review existing work on algorithms in public services, current responsible research and innovation frameworks around algorithms, and emerging literature on public observatories; and map existing examples of public engagement around the use of algorithms in public services in the UK.
Hold a stakeholder workshop to reflect on the findings of the review and mapping work, conduct initial foresight around algorithms in public services, and co-design the institutional blueprint of the observatory for algorithms and society.
Propose a blueprint for an observatory for algorithms and society which would continually map public engagements with the use of algorithms in public services, consider the future development of these approaches, and apply these insights to the governance of algorithms in public services.
This project’s social impact is …
… to make the case for – and propose further steps to enable – continuous democratic oversight and the responsible development of socially just algorithms in public services.
The key outputs of the project will be an academic paper presenting key findings from the review, mapping and stakeholder workshop activities on how to responsibly govern the use of algorithms in public services, and a presentation of the proposed design of the observatory to policy-makers, practitioners and concerned citizens.
It is innovative because …
… the proposal focuses specifically on the use of algorithms in public services, whereas much previous work has focused on the ways in which algorithms are used in private sector contexts.
The project also builds on recent calls for the development of observatory structures around pressing public issues. This new concept recognises that there are multiple publics, forms of engagement and issue definitions in play around any given emerging technology, and can also capture broader trends, injustices and connections over time.
A further innovation of the project is to link the observatory concept with the practical application of responsible research and innovation frameworks. This project is also novel in bringing together academic innovations around concepts of public engagement (UEA), with recent innovations in public engagement practice (Involve).
The project will deliver a blueprint for an observatory designed to take into account the socio-economic contexts of algorithmic justice in public services, together with a toolkit for the further inclusion of these contexts in the governance of these technologies.