Jul 1, 2020

A Restorative Justice Approach to Social Media Moderation

Niloufar Salehi awarded NSF grant

The internet was built on the idea of free-flowing information and open discourse. In reality, though, online tools are used as weapons to harass, hurt, and demean others, stifling voices and viewpoints. 

Assistant Professor Niloufar Salehi has an idea that might help address online harm: she’d like to see if the restorative justice process can be implemented successfully in online communities. 

The National Science Foundation just awarded her a Computer and Information Science and Engineering (CISE) Research Initiation Initiative (CRII) grant to study just that. The NSF CRII Award, often referred to as a “mini CAREER Award,” is a prestigious, highly competitive grant awarded to assistant professors at the beginning of their academic careers.

Restorative justice is an approach to justice that focuses on the harm done, centers the needs of victims, encourages offenders to take responsibility for their actions and understand the harm they have caused, and, when the necessary conditions are met, offers a path to redemption. One key assumption is that those most affected by the harm should have the opportunity to become actively involved in repairing it. Used in real-life settings like schools, restorative justice provides a structured format in which the victim and the person who caused harm face each other and address the behaviors that led to the harm.

Restorative justice principles might be used to improve or replace the methods of online moderation now used in both commercial moderation (implemented in a top-down fashion by the owners of sites like Facebook or Twitter) and community-based moderation (carried out by volunteers from the communities themselves, as on Reddit or Wikipedia).

There are flaws in both systems: in commercial moderation, algorithms may be significantly biased depending on how they were designed and trained, and they can also be manipulated. In the community model, there can be a lack of transparency; members who are banned wind up confused and unable to right any wrongs (perceived or real), and the communities aren’t offered the opportunity to address what is usually a systemic problem. 

“We’re imagining how the [moderation] roles will be redesigned if we use the values of restorative justice, which aims to repair harm,” Salehi says.

Salehi’s project will be divided into two phases. Phase One will involve gathering and analyzing data from victims of online harassment, moderators, and restorative justice practitioners in order to better understand viewpoints about online abuse, learn how to engage all stakeholders fairly, and ultimately ask what implementing restorative justice principles might look like online. In Phase Two, she’ll use the data collected to run a series of online participatory design workshops in which interviewees from Phase One will be presented with abusive scenarios and will discuss what appropriate action plans might look like, what values those plans embody, and what tools might be used to implement them. The end result will be a set of principles, plans of action, and prototypes for a fictional platform with a tiered system of moderation that mirrors restorative justice practices.

“One of the hopes I have for the project,” Salehi says, “is to really dive deeper into types of harm online — revenge porn, stalkers — to try to understand what processes and tools would be useful to strive for justice.” 

It might start with some conscious communication. As Salehi says, “In restorative justice, communication is the main tool for action.”

Last updated: June 30, 2020