Project role: Researcher
Claire Leibowicz is a DPhil candidate and the Head of AI and Media Integrity at the Partnership on AI. Her research is generously funded by the OII Shirley Scholarship.
Full project title: Seeing and believing: Understanding multi-stakeholder AI governance toward responsible synthetic media and visual information
Life in the 21st century is filled with digital images. We consume the news through YouTube videos, keep in touch via Instagram images, and use smartphone cameras to capture images that sway public opinion. For instance, in May 2020, a smartphone video depicting George Floyd’s murder prompted waves of racial justice protests in the United States.
New information and communication technologies have enabled a deluge of visual information, thereby bolstering imagery’s power to impact our beliefs, social lives, notions of historical truth, political awareness, and even well-being.
But as ubiquitous, digitized visual experience has become the dominant way for individuals to learn what is happening in the world, new methods for visual manipulation have emerged alongside it. Artificial intelligence powers many such techniques.
Known as synthetic media, though commonly described by the more specific term “deepfakes” (a portmanteau of “deep learning,” a subset of machine learning in AI, and “fake” visual representations of real people and situations), these methods could enable someone to produce a wide range of media toward a variety of ends: some prosocial, some harmful, and some up for debate.
Overall, this research project will examine how diverse stakeholders can shape AI governance for synthetic media. The work will engage stakeholders from across domains, disciplines, and geographic regions to reveal challenges and opportunities for governing synthetic media, and the tactical ways in which multi-stakeholder input can influence AI policymaking.