Researchers find machine learning models still struggle to detect hate speech
Published on 6 Jan 2021
Detecting hate speech is a task even state-of-the-art machine learning models struggle with. That’s because harmful speech comes in many different forms, and models must learn to differentiate each one from innocuous turns of phrase.
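The difficulty the article describes can be illustrated with a toy baseline. The sketch below (entirely hypothetical; the word list and function names are illustrative, not from any real system) shows why naive keyword matching fails: the same word can appear in harmful and innocuous contexts, so a model must learn from context rather than vocabulary alone.

```python
# Hypothetical keyword-baseline sketch: flags text containing blocklisted words.
# Illustrates the limitation the article describes, not a real moderation system.

BLOCKLIST = {"hate", "stupid"}  # toy illustrative list, not a real lexicon


def naive_flag(text: str) -> bool:
    """Flag a text if any word matches the blocklist, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)


# The baseline cannot tell an innocuous turn of phrase from harmful speech:
print(naive_flag("I hate waiting in line"))  # True — a false positive
print(naive_flag("Have a great day"))        # False — correctly ignored
```

A context-blind rule like this produces false positives on everyday phrasing, which is exactly why the researchers turn to learned models — and why even those models still struggle when harmful speech takes subtle forms.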