5 Apr 2018
Part of the University of Oxford’s Science Blog ‘Women in AI’ series. Dr Sandra Wachter, a lawyer and Research Fellow in Data Ethics, AI, Robotics and Internet Regulation/Cyber-security at the Oxford Internet Institute, discusses her work negotiating the legal pitfalls of algorithm-based decision making and an increasingly tech-led society.
What drew you towards a career in AI?
I am a lawyer and I specialise in technology law, which has been a gateway into computer science and science in general.
I’ve always been interested in the relationship between human rights and tech, so a law career was a natural fit for me. I am particularly driven by a desire to support fairness and transparency in the use of robotics and artificial intelligence in society. As our interest in AI increases, I think it is important to design technology that is respectful of human rights and benefits society. I work to ensure balanced regulation in the emerging tech framework.
What research projects are you currently working on?
The development of AI-led technology for healthcare is a key research interest of mine. I’m also very interested in the future of algorithm-based decision-making, whose systems have become increasingly autonomous, complex and less predictable. I’m interested in what that means for society.
At the moment I am working on a machine learning and robotics project that addresses the question of algorithmic explainability and auditing. For example, how can we design unbiased, non-discriminatory systems that give explanations for algorithm-led decisions, such as whether individuals should have a right to an explanation when an algorithm rejects their loan application? I have reviewed the legal framework for loopholes in existing legislation that need immediate consideration, and then urged policy makers to take action where needed.
What interests you most about the scope of AI?
I am interested in developing research-led solutions that can mitigate the risks that come with an increasingly tech-led society. Supporting transparency, explainability and accountability will help to make machine learning technology something that progresses society rather than damaging it and holding people back.
AI in healthcare has the potential to have a massive positive impact on society, such as the development of products for disease prediction, treatment plans and drug discovery.
It is also an exciting time for healthcare robotics: the emerging fields of surgical robotics for less invasive surgeries and assisted-living robotics are fascinating.
What are the biggest challenges facing the field?
On a very basic level, an algorithm is a predetermined set of rules that humans can use to learn something about data and make decisions or predictions. AI is a very complex, more autonomous and less predictable version of a mundane algorithm. It can help us to make more accurate, more consistent, fairer, and more efficient decisions. However, we cannot solve all societal problems with technology alone. Technology is about humans and society, and to keep them at the heart of future developments you need a multi-disciplinary approach. To use AI for good you need to collaborate with other sectors and disciplines, such as the social sciences, and consider issues from all angles, particularly ethical and political responsibility, otherwise you get a skewed view.
What research are you most proud of?
I published research on the use of algorithms for decision making and showed that the law does not guarantee individuals a right to an explanation. It shed light on loopholes and potential problems within the existing structure that will hopefully prevent legal problems in the future. In follow-up work we proposed a new method, ‘counterfactual explanations’, that could give people meaningful explanations even when highly complex systems are used.
As a woman in science and a woman in law how would you describe your experience?
Law is generally a very male-dominated field, and tech-law even more so. People are often surprised when I go to events and they find out that I am the keynote speaker for the day. The general view of what a tech-lawyer ‘is’ is not very diverse or evolved yet, and there is a lot of work to be done to shift this mind-set.
I think it would help to create more opportunities for women to have more visibility, such as speaking at events. People need to see from a young age that something is as much for one sex as it is for another. I still remember when I was at high school, the Design Technology subjects were split by gender, with boys taking woodwork, while girls learned knitting and sewing. I desperately wanted to do woodwork and build a birdhouse with the boys, but my teacher’s response when I asked was simply that ‘girls don’t do that.’ Young girls need to be supported and encouraged instead of told that they can’t do something.
Who inspires you?
I am very lucky: my grandmother was one of the first women to be admitted to a technical university, so I grew up with a maths genius as one of my role models. People need to see that gender isn’t a factor in opportunity; it is about passion, dedication, and talent.
It is the University’s first AI Expo tomorrow, what would you like the event’s legacy to be?
This event is a very important step forward for the University and I hope that it will inspire more events like it in the future. AI is a rapidly emerging field and it is really important to raise awareness and show the world that Oxford not only takes it seriously, but that we are working to use AI for good and are mindful of the consequences that come with it.