About
Alex is a postdoctoral researcher at the Oxford Internet Institute. His work in computer science focuses on developing machine learning and statistical estimation methods that satisfy formal privacy guarantees.
Privacy frameworks such as differential privacy achieve their guarantees by restricting the amount of information a computation can derive from any individual input. Variants of such “information bottlenecks” can be applied as practical tools for goals beyond privacy, and they form a unifying theme of Alex’s broader interests. While continuing to explore the privacy dimensions of this class of techniques, he is interested in what other problems they can solve.
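By way of illustration, the Laplace mechanism is one standard way this kind of restriction is enforced: noise calibrated to a query's per-individual sensitivity masks any single record's contribution to the released answer. The sketch below is a minimal, generic example in Python; the function, dataset, and parameters are illustrative assumptions, not drawn from Alex's own work.

```python
import numpy as np

def laplace_mechanism(data, query, sensitivity, epsilon, rng=None):
    """Release query(data) with Laplace noise of scale sensitivity / epsilon.

    If changing any single record shifts the true answer by at most
    `sensitivity`, the noise masks each individual's contribution and the
    release satisfies epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_answer = query(data)
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1, since adding or removing
# one person changes the count by at most 1.
ages = [34, 29, 51, 42, 38]
noisy_count = laplace_mechanism(
    ages, lambda d: sum(1 for a in d if a >= 40), sensitivity=1, epsilon=0.5
)
print(noisy_count)
```

Smaller epsilon means a tighter bottleneck: more noise, stronger privacy, and less information about any one individual in the output.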
Such techniques may be used to protect copyright, ensuring that an algorithm learns from examples without merely copying them. They may also be used to obtain meaningful representations of complex data and to guarantee that machine learning algorithms generalize to unseen examples. What is the relationship between these different objectives? What are the synergies and tensions between them? Which aspects of these objectives are not captured by the concept of information bottlenecks, and how might they be expressed instead?
Alex completed his PhD in computer science at the University of Toronto.
Research Interests
Machine Learning, Differential Privacy, AI Safety, Information Theory, Semantic Representation