Calls for ‘ethical AI’ are legion, from the OECD and the European Parliament to Microsoft and the Pope. The main problem with existing initiatives to shape AI applications is that they are high-level and unenforceable, which means that organisations are largely left to interpret and enact such ethics themselves. This has left workers exposed to potential risks and abuses of AI technologies.
The pathway from high-level principles to enforceable regulation around the impact of AI on working conditions has not been clearly defined. The focus on ethics can be used as a way of getting around regulation, especially when technology companies opt for voluntary codes of practice that they’ve shaped themselves.
We need to move from principles to processes that provide mechanisms to ensure and enforce compliance. This entails understanding how AI systems are already shaping working conditions and how we can ensure that AI is used to foster decent and fairer work. For ethics to matter at work, and for AI to be used in ways that promote fair work, certain criteria need to be met. These include:
Regular engagement with multiple external and internal stakeholders;
Mechanisms for independent oversight;
Transparency around decision-making procedures;
Justifiable standards based on evidence;
Clear, enforceable legal frameworks and regulations.
The OECD’s five principles for responsible stewardship of trustworthy AI are an important starting point for AI accountability in the workplace. This project proposes to further refine and apply such principles while developing processes through which they can be operationalised to facilitate fairer working conditions.
In order to measure best practice, we first have to refine the benchmarks that we measure practices against and the processes by which we measure them. Despite the proliferation of high-level principles for AI ethics, there are no agreed-upon standards for fair, decent, or just work outcomes in workplaces where human workers work in tandem with AI. The proposed research will address this gap. Building on the OECD principles as benchmarks, our immediate aim is to determine AI best practice regarding working conditions. This will produce a set of AI fair work principles and operationalisable processes through which they can be applied, measured, and evaluated in any workplace.
As AI enters ever more labour processes, the longer-term outcomes will be the avoidance of harms, and fairer outcomes, for workers. Because technologies tend to be path-dependent, embedding a set of concrete principles and measurable processes at the outset of technological diffusion is an important way to shape their social effects. This project will set the agenda for these longer-term outcomes and allow for future expansion through engagement with industry, labour, and governments around the world.
Bossware technology is being used in increasingly opaque ways. Employers are delegating serious decisions to algorithms – such as recruitment, promotions, and sometimes even sackings.
Fairwork highlights best and worst labour practices in the platform economy. Our goal is to show that better, and fairer, jobs are possible in the platform economy, and that low pay, precarity, and poor working conditions are not inevitable.