Professionals tasked with preserving online security hope to use new machine-learning-based techniques to develop a “fairer” system for distinguishing patterns of “good” and “bad” usage, moving beyond regional blocking. However, we argue that these systems may continue to embed unequal treatment and, more troublingly, may further disguise such discrimination behind more complex and less transparent automated assessment.