Google’s algorithm for detecting hate speech is racially biased
When the researchers tested Google's Perspective, an AI tool the company lets anyone use to moderate online discussions, they found racial biases. Whether language is offensive can depend on who is saying it and who is hearing it, but AI systems do not, and currently cannot, understand that nuance.