
When the Algorithm Itself is a Racist: Diagnosing Ethical Harm in the Basic Components of Software

Published on 15 Oct 2016
Written by Philip Howard

Computer algorithms organize and select information across a wide range of applications and industries, from search results to social media. Abuses of power by Internet platforms have led to calls for algorithm transparency and regulation. Algorithms have a particularly problematic history of processing information about race. Yet some analysts have warned that foundational computer algorithms are not useful subjects for ethical or normative analysis due to complexity, secrecy, technical character, or generality. We respond by investigating what it is an analyst needs to know to determine whether the algorithm in a computer system is improper, unethical, or illegal in itself. We argue that an “algorithmic ethics” can analyze a particular published algorithm. We explain the importance of developing a practical algorithmic ethics that addresses virtues, consequences, and norms: We increasingly delegate authority to algorithms, and they are fast becoming obscure but important elements of social structure.


Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2016). Automation, Algorithms, and Politics | When the Algorithm Itself is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. International Journal of Communication, 10, 19. Retrieved from http://ijoc.org/index.php/ijoc/article/view/6182
