Social Justice in the Digital Economy: Summer Webinar Series

Monday 7th June // 6pm – 7:30pm BST

The Evolution of Social Justice in the Age of Networks and Machine Learning

Join us for an evening in which world-leading researchers and academics in Design, HCI and Human Rights help us explore the ‘life’ of data; how thinking about AI as relational infrastructure changes the ethical questions and concerns we work with; and how Big Tech and AI are challenging traditional ways of conceiving human rights within the law.

The Evolution of Social Justice in the Age of Networks and Machine Learning is our first webinar, part of a week-long series of talks and panels exploring social justice and the digital economy. Sign up to this and our other webinars here.

Keynote Speakers

  • AI as Relational Infrastructure, Prof Irina Shklovski, University of Copenhagen
  • Interrogating the Machine Learning Pipeline from Within, Dr Michael Muller, Research Staff Member and Master Inventor, IBM Research
  • Human Rights Implications of New and Emerging Technologies, Prof Lorna McGregor, Human Rights Centre, University of Essex

Chair: Prof Ann Light, University of Sussex


BIO

Irina Shklovski is Professor of Human-Centred Computing at the University of Copenhagen. She works across many disciplines, focusing on ethics in technology development, information privacy, social networks, and relational practices. Her projects address responsible technology design, data governance, online information disclosure, the use of self-tracking technologies, data leakage on mobile devices and the sense of powerlessness people experience in the face of massive personal data collection.


BIO

Lorna McGregor is a Professor of International Human Rights Law at Essex Law School, and PI and Director of the multi-disciplinary Human Rights, Big Data and Technology Project (HRBDT) funded with £4.7m from the UK Economic and Social Research Council. Lorna is a Co-Chair of the International Law Association’s Study Group on Individual Responsibility in International Law and a Contributing Editor of EJIL Talk!. She was the Director of the Human Rights Centre at the University of Essex for two terms (2013 – 2019) and has held positions as a Commissioner of the British Equality and Human Rights Commission (2015 – 2019) and as a trustee of the AIRE Centre. Prior to becoming an academic, Lorna held positions at REDRESS, the International Bar Association, and the International Centre for Ethnic Studies in Sri Lanka.

Human Rights Implications of New and Emerging Technologies

This talk will examine how the design, development and deployment of new and emerging technologies, including artificial intelligence (AI) technologies, within the public sector can impact human rights. It will analyse the adequacy and effectiveness of the international human rights law response to the risks posed by the use of AI technologies, the extent to which it complements and builds on emerging AI ethical principles and data protection laws, and the gaps that remain. It will also discuss dedicated AI governance frameworks, including national AI strategies and the draft EU regulation on AI.


BIO

Michael Muller works as a research staff member at IBM Research AI, on the traditional and unceded lands of the Wampanoag and Massachusetts peoples (known to settlers as Cambridge, Massachusetts, US). He works at the intersection of social science and computer science. His current work explores configurations of human-AI collaboration, and his past projects have examined the work-practices of data science workers. Michael co-proposed the new CHI 2021 review subcommittee on Critical and Sustainable Computing, and served on that committee during its first year. He also serves on the SIGCHI CARES committee. ACM kindly recognized Michael as a Distinguished Scientist.

Interrogating the Machine Learning Pipeline from Within

Recent works such as Coded Bias, Data Feminism, and Data Justice have made a public case for problems with the outcomes and methods of data science and machine learning projects, which can have profound and often unfair impacts on humans. Through a series of workshops on “Human Centered Data Science” and “Interrogating Data Science,” we have begun to understand some of the human processes behind these projects. I will present a layered model of machine learning practices, beginning with a foundational layer of a measurement plan and ending with a deployed system. In this model, each layer tends to assume that the layers that preceded it were flawless, and thus erases the uncertainties and potential biases of the human work in the “lower” layers on which the current layer is built.

We can begin to repair these weaknesses, but we will need to pay more attention to the necessarily human and collaborative work-practices of data science, and we will need to re-think our technologies to preserve a more transparent and accountable provenance of human decisions and human outcomes that contribute to data science applications.
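To make the layering problem concrete, here is a minimal, hypothetical Python sketch (illustrative only, not code from the talk): each pipeline stage records its own caveats alongside its result, but the next stage consumes only the bare result, so the human judgments and uncertainties of the “lower” layers are silently erased.

    from dataclasses import dataclass, field

    @dataclass
    class StageOutput:
        result: object                               # what the next layer consumes
        caveats: list = field(default_factory=list)  # human judgments and uncertainty

    def measurement_plan():
        # Humans decide what to measure; these choices are judgment calls.
        return StageOutput(["clicks", "dwell_time"],
                           ["proxy metrics chosen by one small team"])

    def collect_and_label(features):
        # Annotators disagree, but only a single label per item survives.
        rows = [(f, "majority_label") for f in features]
        return StageOutput(rows, ["annotator disagreement discarded"])

    def train_model(labelled_rows):
        # Training treats the upstream labels as ground truth.
        return StageOutput(f"model fit on {len(labelled_rows)} examples",
                           ["assumes all lower layers were flawless"])

    # Each layer reads only .result, so every layer's caveats are erased.
    plan = measurement_plan()
    data = collect_and_label(plan.result)    # plan.caveats are dropped here
    model = train_model(data.result)         # data.caveats are dropped here
    print(model.result)                      # caveats from all layers are gone

Repairing this, as the talk argues, would mean carrying such provenance forward through the pipeline rather than discarding it at each layer boundary.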