OII Colloquia: What’s wrong with using machine learning to personalise law?
25 January 2018
About this video
In the legal services industry, automation and machine learning have been transforming legal practice by automating tasks that were previously highly labour-intensive and typically undertaken by junior lawyers, such as document management, legal discovery and due diligence. Although there is increasing concern about how these digital transformations will affect the job market for lawyers, and for legal services more generally, less attention has been given to the way in which machine learning might be used to ‘personalise’ the law. In particular, several legal scholars claim that the data-driven techniques used by commercial providers such as Amazon and Google to personalise the user’s experience (whether in the provision of shopping recommendations or responses to search queries) could also be used to ‘personalise’ the law. In this paper, we consider the claims made by those who believe that such a move would enhance the administration of contemporary legal systems. In so doing, we offer our preliminary thoughts, suggesting that these proposals are not merely morally problematic but, if implemented, might fundamentally threaten the rule of law upon which the legitimacy of contemporary legal systems rests.
About the speakers
Prof Karen Yeung has recently taken up the post of Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at the University of Birmingham, in the School of Law and the School of Computer Science, and is currently a Distinguished Visiting Fellow at Melbourne Law School. Karen came to the United Kingdom from Australia in 1993 as a Rhodes Scholar to read for the Bachelor of Civil Law at Oxford University, after completing a combined Law/Commerce degree at the University of Melbourne. She spent ten years as a University Lecturer at Oxford University and as a Fellow of St Anne’s College, where she wrote her DPhil, before taking up a Chair in Law at King’s College London in September 2006 to help establish the Centre for Technology, Law & Society (‘TELOS’), serving as its Director from 2012 until the end of 2017. Her research expertise lies in the regulation and governance of, and through, emerging technologies, with her more recent and ongoing work focusing on the legal, ethical, social and democratic implications of a suite of technologies associated with automation and the ‘computational turn’, including big data analytics, artificial intelligence (including various forms of machine learning), distributed ledgers (including blockchain) and robotics. Her work has been at the forefront of nurturing ‘law, regulation and technology’ as a sub-field of legal and interdisciplinary scholarship. She is keen to foster collaboration between academics and policy-makers across the various disciplines concerned with examining the social, legal, democratic and ethical implications of technological development, and to promote informed, reflective technology policy-making and implementation.
Prof Timothy Endicott has been Professor of Legal Philosophy since 2006, and a Fellow in Law at Balliol College since 1999. Professor Endicott writes on jurisprudence and constitutional and administrative law, with special interests in law and language, and in interpretation. He served as Dean of the Faculty of Law for two terms, from October 2007 to September 2015. He is the author of Vagueness in Law (OUP 2000) and Administrative Law, 3rd ed (OUP 2015). After graduating with the AB in Classics and English, summa cum laude, from Harvard, he completed the MPhil in Comparative Philology at Oxford, studied Law at the University of Toronto, and practised as a litigation lawyer in Toronto. He completed the DPhil in Law at Oxford in 1998. He has been General Editor of the Oxford Journal of Legal Studies since 2015.