MIMS Final Project 2022

Algorithm Unwrapped: Sense-Making Tools for Algorithmic Content Harms

The Problem

Every day, whether you are aware of it or not, algorithms decide what you watch, listen to, and consume. This is the world of algorithmic content recommendation: content recommended to you by algorithms, often based on your digital activity and what those algorithms infer about your background, demographics, and interests. In the process, however, users open themselves up to content they never chose: content that can be harmful, that targets their impulses and insecurities, and that can slowly but surely alter their beliefs, desires, and preferences.

We call this algorithmic content harm: the psychological, social, physical, or other harms experienced by someone while interacting with (or as a result of) content that is algorithmically recommended to them. Algorithmic content harm can be hard to capture. Because each video passes by in a matter of seconds, it is hard to see how tens of thousands of videos, viewed over months, can add up to build unhealthy patterns of thinking or negatively affect a person's behavior, identity, and mental health.

To allow people to identify this type of slow violence for themselves, the algorithm needs to be demystified in a relatively accessible way. Casual users often do not understand how their experience online is shaped by algorithms, choice architectures, and targeted marketing. For users to effectively resist algorithmic harm and reclaim a sense of agency, education about this problem needs to come first.

The Solution

Our project, Algorithm Unwrapped, is a set of machine learning education tools that help social media users better understand their own algorithmic content feeds, make sense of how those feeds affect their emotions and identity, and reflect on what they really want out of their algorithmic content recommendations.

Our tools include:

1. An educational zine for the general public to learn about algorithmic content recommendations and their potential harms, with sensemaking activities throughout to help readers actively consider the impacts of their own TikTok feeds
2. A data toolkit to help individuals download their own TikTok data, extract their view history, scrape hashtags, and explore this data to better understand their trends over time
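As an illustration of the kind of exploration the data toolkit enables, the sketch below tallies views per day from a downloaded TikTok data export. This is a minimal example, not the toolkit itself: the `"Activity"` → `"Video Browsing History"` → `"VideoList"` field names are assumptions about the JSON export layout, which varies across export versions.

```python
import json
from collections import Counter

def daily_view_counts(export_json: str) -> Counter:
    """Count watched videos per day from a TikTok data export.

    Assumes view history is nested under
    "Activity" -> "Video Browsing History" -> "VideoList", with each
    entry carrying a "Date" timestamp like "2022-05-01 12:34:56".
    Field names may differ in other export versions.
    """
    data = json.loads(export_json)
    videos = data["Activity"]["Video Browsing History"]["VideoList"]
    # Group by the date portion (YYYY-MM-DD) of each timestamp.
    return Counter(v["Date"].split(" ")[0] for v in videos)

# Tiny inline sample standing in for a real downloaded export file.
sample = json.dumps({
    "Activity": {
        "Video Browsing History": {
            "VideoList": [
                {"Date": "2022-05-01 09:15:02", "Link": "https://www.tiktok.com/@a/video/1"},
                {"Date": "2022-05-01 22:40:11", "Link": "https://www.tiktok.com/@b/video/2"},
                {"Date": "2022-05-02 18:05:47", "Link": "https://www.tiktok.com/@c/video/3"},
            ]
        }
    }
})

print(daily_view_counts(sample))  # Counter({'2022-05-01': 2, '2022-05-02': 1})
```

From counts like these, a user can start to see binge days and long-term trends that single videos, watched seconds at a time, never reveal.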


Image captions:

- Zine cover: "algorithm unwrapped — sensemaking for online content harms"
- Research participant completing a journaling activity that asks them to notice how they feel after watching each of five TikToks on their For You page
- Research participant reading through the zine
- Zine chapter 2, on types of algorithmic content harms
- Nicole Chi, Joanne Ma, and Alex Gao in MIMS regalia in front of South Hall

Last updated: June 3, 2022