Full project title:
Producing cross-disciplinary insights for diverse and inclusive design of AI in arts and fashion
Practice-based approaches to AI – in which artists, fashion designers, and computer scientists experiment with the inherent limitations of algorithmic reasoning – allow us to reflect on the social experiences mediated by technology. For instance, artists are interrogating AI systems to expose the limits of algorithmic reasoning, such as bias in training data concerning gender and sexual identity (Jake Elwes and Edinburgh Futures Institute, 2020).
While practice-based insights can highlight the limitations of AI systems, it remains unclear how issues of diversity and inclusivity can be operationalised to inform the regulation of these tools and practices. Given the speed of advances in AI techniques, including the rise of Generative Adversarial Networks and Large Language Models for content generation, delimiting "harmful" from "non-harmful" uses requires a multidisciplinary approach to the design, use, and regulation of these technologies.
The project will draw on two workshops involving (1) legal scholars, social scientists, and AI ethicists; and (2) artists, fashion designers, and computer scientists, to produce a training guide for practitioners engaging with practice-based approaches to AI. It will examine how practice-based approaches can reveal regulatory tensions concerning algorithmic bias, and how a socio-legal approach to AI can feed back into those practices to promote the diverse and inclusive design of AI systems.