Ethics of machine learning in children’s social care
Abstract
Across the press, academia, and the worlds of policy and practice, concerns abound about the possible impacts of the growing use of machine learning (ML) in children's social care (CSC) on individuals, families, and communities. Many express legitimate worries that the depersonalising and de-socialising effects of trends toward the automation of CSC are harming the care environment and negatively altering the way frontline workers engage with families and children. Others raise concerns that these data-driven ML systems merely reinforce, if not amplify, historical patterns of systemic bias and discrimination. Still others highlight how the mixed results of existing ML innovations signal widespread problems of poor data quality and questionable data collection and recording practices.
It is against this backdrop that What Works for Children's Social Care (WWCSC) commissioned The Alan Turing Institute and the Rees Centre, University of Oxford, to write this report on the research question "Is it ethical to use machine learning approaches in children's social care systems and if so, how and under what circumstances?". The findings we present here take some preliminary steps toward providing an answer. We offer a three-tiered framework for thinking about the ethics of ML in CSC, encompassing ethical values, principles, and professional virtues, and apply it to the specific circumstances of children's social care in England and the technical details of ML.
This research is informed by a range of methods: a literature review, an integrative examination of existing ethical frameworks in social care and ML, a stakeholder roundtable with 31 participants, and a workshop with 10 family members who have lived experience of children's social care.
Key Information
Date published: 1 Jan 2020