By Gerrit De Vynck and Jeremy Kahn
YouTube has tried to keep violent and hateful videos off its service for years. The Google unit hired thousands of human moderators and put some of the best minds in artificial intelligence on the problem.
On Thursday, those efforts were no match for a gunman who used social media to broadcast his killing spree at a New Zealand mosque, or for the legions of online posters who tricked YouTube’s software into spreading the attacker’s video...
"Once content has been determined to be illegal, extremist or a violation of their terms of service, there is absolutely no reason why, within a relatively short period of time, this content can’t be eliminated automatically at the point of upload," said Hany Farid, a computer science professor at the University of California at Berkeley’s School of Information and a senior adviser to the Counter Extremism Project. "We’ve had the technology to do this for years..."
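The technology Farid alludes to is hash-based matching: once a video is flagged, its fingerprint is added to a blocklist and every new upload is checked against it. A minimal sketch of the idea, using an exact cryptographic hash and a hypothetical `BLOCKLIST` (real systems such as PhotoDNA, which Farid helped develop, use robust perceptual hashes instead, precisely because exact hashes fail against re-edited copies, as the second check below shows):

```python
import hashlib

# Hypothetical blocklist of fingerprints of known extremist videos.
BLOCKLIST = set()

def fingerprint(data: bytes) -> str:
    """Exact cryptographic hash of the file bytes."""
    return hashlib.sha256(data).hexdigest()

def review_upload(data: bytes) -> str:
    """Check an upload against the blocklist at the point of upload."""
    return "blocked" if fingerprint(data) in BLOCKLIST else "allowed"

original = b"...original video bytes..."
BLOCKLIST.add(fingerprint(original))

print(review_upload(original))       # exact re-upload is caught: "blocked"

modified = original + b"\x00"        # a trivially altered copy
print(review_upload(modified))       # slips past exact matching: "allowed"
```

The one-byte change defeating the filter is exactly how re-uploaders evaded YouTube’s systems; perceptual hashes are designed to survive such edits.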
Hany Farid is a professor at UC Berkeley with a joint appointment at the School of Information and in the EECS department.