What does it mean to measure risk on digital platforms, especially for those most vulnerable to online harm? Marisa Hall, a student in UC Berkeley’s Master of Information and Cybersecurity (MICS) program, is tackling this question head-on with research that quantifies and addresses platform risk for marginalized users, in particular LGBTQ+ youth.
Hall’s research was sparked by a series of disturbing trends that accelerated around 2020: a sharp rise in anti-LGBTQ+ bills being introduced and passed, and an increasingly volatile social media landscape. For instance, Meta began allowing LGBTQ+ users to be labeled as “mentally ill,” and YouTube removed gender identity from its list of protected categories. These policy changes, combined with an uptick in community-seeking behavior online since the start of the pandemic, form a dangerous combination for vulnerable youth.
“In 2023, only 21% of queer and trans youth who reported abuse online saw any action taken by the platform,” Hall said. “Now, there is this floodgate that has opened around what’s allowed on platforms without any moderation, which is really alarming, dangerous, and has real world consequences.” An annual report published by the Trevor Project notes that states passing anti-trans laws see an increase of up to 72% in suicide attempts among transgender and gender-nonconforming teens.
Enter: platform risk quantification and mitigation. Platform risk refers to the inherent risks of relying on a specific technology platform; for example, users of social media sites face dangers such as doxxing and cyberbullying. Using Mozilla’s Rapid Risk Assessment methodology, Hall built a formula that could help social platforms use demographic data to calculate how vulnerable a population is on their site, then take appropriate steps to address those risks.
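Hall’s actual formula is not published in this article, but a demographic-weighted risk score in the spirit of Mozilla’s Rapid Risk Assessment might look something like the sketch below. All names, weights, and thresholds here are illustrative assumptions, not Hall’s method:

```python
# Hypothetical sketch of a demographic-weighted platform risk score,
# loosely inspired by the impact levels in Mozilla's Rapid Risk
# Assessment (RRA). Weights and structure are illustrative only.

IMPACT = {"low": 1, "medium": 2, "high": 3, "maximum": 4}  # RRA-style impact levels

def risk_score(likelihood: float, impact: str, multiplier: float = 1.0) -> float:
    """Score one threat: likelihood (0-1) times impact, scaled by how
    vulnerable the affected user population is on this platform."""
    return likelihood * IMPACT[impact] * multiplier

def platform_risk(threats: list[dict], vulnerable_share: float) -> float:
    """Aggregate risk across threats; the demographic share of
    vulnerable users (0-1) raises the multiplier for targeted harms."""
    multiplier = 1.0 + vulnerable_share  # e.g. 20% vulnerable users -> 1.2x
    return sum(
        risk_score(t["likelihood"], t["impact"],
                   multiplier if t.get("targeted") else 1.0)
        for t in threats
    )

threats = [
    {"likelihood": 0.6, "impact": "medium", "targeted": True},   # harassment
    {"likelihood": 0.2, "impact": "high", "targeted": True},     # doxxing
    {"likelihood": 0.3, "impact": "medium", "targeted": False},  # phishing
]
print(round(platform_risk(threats, vulnerable_share=0.2), 2))  # prints 2.76
```

The key design idea, matching the article’s framing, is that the same threat scores higher on a platform where a larger share of users belongs to a targeted, vulnerable population, so mitigation priorities shift with demographics.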
Through this framework, Hall identified a list of possible risks that the average LGBTQ+ user faces, ranging in severity from phishing and account takeovers to loss of life. Many of these risks, they noticed, resulted from a lack of stringent guardrails against harassment and bullying, in addition to a lack of digital security literacy among users.
“If subsets of users are more vulnerable on your platform, technical controls should account for that,” Hall said. “Our responsibility as security professionals is to connect the dots between what is happening in our world and be proactive around how things could play out online as technology becomes more integrated with our daily lives.”
To protect these vulnerable users, Hall proposed recommendations to make platforms more secure by design and more privacy-protective. For example, they suggest that social media sites hide sensitive information by default, establish explicit policies against LGBTQ+-targeted abuse, partner with LGBTQ+ organizations, and build more user-friendly reporting mechanisms. To achieve these goals, Hall argues, platforms urgently need robust trust and safety teams, AI bias auditors, and community liaisons who continuously monitor for risk and respond to new threats as they appear.
These changes could also benefit platforms themselves, Hall noted. Research shows that countries with protections for LGBTQ+ communities have better-performing economies on average, and Hall believes the same could hold for social media companies. A more secure-by-design platform, they added, could help reduce online harassment, which has been shown to depress engagement and, in turn, advertising revenue and platform valuations.
“I decided to do this research because I’m a queer, gender expansive person who felt really supported growing up when and where I did,” Hall said. “That’s such a privilege, but I think that being a queer kid right now can be a particularly volatile experience and is something that I care really deeply about showing up for.”
“The convergence of hostile political climates, technical acceleration, and platform governance failures has created a critical moment for action. Technology leaders, policymakers, and civil society must work together to ensure that digital spaces become truly safe and empowering for all users, starting with those who need protection most.”
Hall has been invited to present their research at the Stanford Trust & Safety Conference, which will be held from September 25-26, 2025, and looks forward to sharing their work with industry experts and leaders interested in creating change within their organizations.
Hall gives special thanks to cybersecurity lecturer Tiffany Rad, who inspired them to continue researching this topic and provided support. Hall is open to collaboration and encourages anyone curious about implementing this methodology or learning more about this research to reach out via LinkedIn.
