
AI @ Work: overcoming structural challenges to ensure successful implementation of AI in the workplace

Published on 13 Aug 2020
Written by Gina Neff

News reports about artificial intelligence overwhelmingly showcase how AI is supposed to work. Yet in reality, much more work needs to be done to ensure systems that are safe, fair and effective for the workers and workplaces that use them in daily life.

In our new report, AI @ Work, co-authored by Professor Gina Neff with doctoral candidates Nayana Prakash and Maggie McGrath of the Oxford Internet Institute, we show where serious gaps remain in the social and technical infrastructures required for effective AI in many workplace settings, and we offer a series of recommendations on how to address these gaps.

Existing infrastructure challenges

Our analysis of over 400 English-language articles published worldwide in 2019 and 2020 sets out the common challenges facing companies and their workforces. These broadly fit within three key themes: challenges of integration, when workplaces and workers are not yet prepared for AI use; challenges of reliance, when AI systems are trusted either too much or too little in practice; and challenges of transparency, about where the work of AI systems happens and who does that work.

Taken together, these challenges mean that the AI systems used in workplaces are frequently ineffective, simplistic and opaque. For workers, these systems are generating new problems, further obscuring the work that they do, and turning new forms of surveillance of their work into data inputs for new products and services.

Common examples of ineffective AI in the workplace include:

  • Medical AI diagnostic systems reject images taken in normal clinical settings, or deliver results that reflect where diagnostic images were recorded rather than patients’ conditions. These systems can also produce results that quickly become outdated as medical knowledge and clinical best practice evolve.
  • Companies market services as automated and AI-enabled, when the work is actually performed by overseas human labourers.
  • Companies use their workers’ data to train algorithms, turning their employees’ daily tasks into new forms of algorithmic labour.
  • Global supply chains necessary for AI hide where value is generated, extracting cheap labour and data from a Global South that enriches the Global North and echoing earlier forms of colonial exploitation.

Recommendations for effective AI in the workplace

In our report, we recommend that the companies designing these systems and the companies deploying them be clear and open about what data is being used and how, work to manage the expectations of clients and users, and enable people to question and critique the AI systems they work with.

For news organisations, scholars, and activists, we suggest expanding how we talk about AI in society, taking care not to oversell the promises and undersell the shortcomings of the systems currently in use and expanding the sets of stories that we tell about how today’s AI works in practice.

For society, we urge that we expand people’s ability and readiness to critically question AI systems and interrogate how they fail in practice so that workers and workplaces can be better prepared to fix shortcomings and problems.

Ultimately, serious gaps remain in the social and technical infrastructures required for functional AI in many workplace settings. Until these gaps of integration, reliance and transparency can be addressed by workers and their workplaces, AI tools and technologies will continue to demonstrate serious, and sometimes dangerous, shortcomings in practice.

The AI @ Work report was written by OII DPhil students Nayana Prakash and Maggie McGrath and Professor Gina Neff, and published in conjunction with Future Says_, an initiative of the Tech Impact Network supported by the Minderoo Foundation. A more accessible version of the report is also available for readers with visual impairment.
