For the I School Community

Ph.D. Research Reception

Wednesday, March 8, 2023
3:40 pm to 7:00 pm PST

Ph.D. Research Reception: Selected Presentations (March 8, 2023)

Join us as Ph.D. students from the School of Information share their innovative research.

The Ph.D. program at the School of Information draws doctoral students from a wide array of disciplines whose interests and approaches are as varied as their backgrounds. Though they all take technology as their object of study, our Ph.D. students approach the topic from many different angles — economic, political, social, legal, ethical — in an effort to understand the present impact and future development of information technology.


3:40–3:45 pm Opening Remarks
3:45–4:05 pm Elizabeth Resor
4:05–4:25 pm Justin Norman
4:25–4:45 pm Suraj R. Nair
4:45–5:05 pm Sijia Xiao
5:05–5:30 pm
5:30–5:50 pm Naitian Zhou
5:50–6:10 pm Simón Ramírez Amaya
6:15 pm

Simple Scores Are Messy Signals: How Users Make Sense of Neighborhood Scores on Real Estate Platforms

Elizabeth Resor

This presentation will draw on interviews with people who used real estate platforms to search for housing in Oakland, CA, and Las Vegas, NV. On these platforms, users encountered numerical scores and shaded maps that rated the schools, walkability, transit, noise, and safety of the neighborhood associated with a listing. Despite the seeming simplicity of these scores, users drew a wide range of insights from them, and many expressed conflicting feelings about how to incorporate the scores into their housing decisions.

An Interdisciplinary Framework for Evaluating Deep Facial Recognition Technologies for Forensic Applications

Justin Norman

Much has been written about flaws in facial recognition, particularly in terms of gender and racial bias. With facial recognition systems seeing widespread use in law enforcement, it is also critical that we understand their accuracy, particularly in high-stakes forensic settings.

While precision and recall are reasonable ways to assess the overall accuracy of a recognition system, an often-overlooked aspect of these measurements is the composition of the comparison group.

For example, high precision may be relatively easy to achieve if person X has highly distinct characteristics (age, race, gender, etc.) relative to the other people in the dataset against which they are being compared. On the other hand, the same underlying recognition system may struggle if person X shares many characteristics with the comparison group.

In addition, most facial recognition systems make strong assumptions about the image quality, pose, occlusions, and sizes of the images presented, both as source images and as datasets for comparison. In the real world, there are often dramatic variations in all of these variables for any given set of images.

In the classic eyewitness setting, a witness is asked to identify a suspect in a six-person lineup consisting of the suspect and five decoys with the same general characteristics and distinguishing features (facial hair, glasses, etc.) as the suspect. We propose that a similar approach be employed to assess the accuracy of a facial recognition system deployed in a forensic setting. This approach ensures that the underlying facial recognition task is similar regardless of the differences or similarities between the probe and comparison faces, allowing a sounder determination of the accuracy (and thus the feasibility or suitability) of the model for use in real-world, high-impact use cases.
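The lineup-style evaluation described above can be sketched in code. The snippet below is a hypothetical illustration, not the authors' implementation: it assumes an embedding-based recognizer (cosine similarity over face embeddings, here called `rank1_identify`) and a pool of decoy faces already matched on the suspect's general characteristics, and it measures how often the system picks the true suspect out of a shuffled six-person lineup.

```python
import numpy as np

def rank1_identify(probe, gallery):
    """Return the index of the gallery embedding most similar to the
    probe (cosine similarity), mimicking a rank-1 identification."""
    gallery = np.asarray(gallery, dtype=float)
    probe = np.asarray(probe, dtype=float)
    sims = gallery @ probe / (
        np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe) + 1e-12
    )
    return int(np.argmax(sims))

def lineup_accuracy(recognizer, probes, suspects, decoy_pool,
                    n_decoys=5, seed=0):
    """Evaluate a recognizer the way an eyewitness lineup is run:
    each probe is compared against its true match plus n_decoys decoys
    drawn from a pool of faces sharing the suspect's characteristics."""
    rng = np.random.default_rng(seed)
    hits = 0
    for probe, suspect in zip(probes, suspects):
        idx = rng.choice(len(decoy_pool), n_decoys, replace=False)
        lineup = np.vstack([suspect[None, :], decoy_pool[idx]])
        order = rng.permutation(len(lineup))   # shuffle lineup positions
        choice = recognizer(probe, lineup[order])
        hits += int(order[choice] == 0)        # picked the true suspect?
    return hits / len(probes)
```

Because every probe faces a lineup of the same size and composition, the task difficulty is held constant across demographic groups, which is the point of the proposed protocol.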

Humanitarian Aid: A Pathway to Financial Inclusion?

Suraj R. Nair

This project examines the potential for humanitarian aid, delivered via electronic transfers, to expand access to basic financial services. Focusing on a humanitarian aid program in Togo, we show that short-term aid can increase account ownership and usage of the wider mobile money ecosystem among beneficiaries and contacts in their social networks.

Empowering Online Harm Survivors through a Sensemaking Process

Sijia Xiao

Interpersonal harm is a prevalent problem on social media platforms. Survivors are often left out of the traditional content moderation process and face uncertainty and the risk of secondary harm when seeking outside help. Our research aims to empower survivors in a critical and early stage of addressing harm: the sensemaking process. We developed SnuggleSense, a tool that empowers survivors by guiding them through a reflective sensemaking process inspired by restorative justice practices. Our evaluation found that SnuggleSense empowers participants by expanding their options for addressing harm beyond traditional content moderation methods, helping them understand their needs for restoration and healing, and increasing their engagement and emotional support when addressing harm for their friends. We discuss the implications of these findings, including the importance of providing guidance, agency, and information in survivors' sensemaking of harm, as well as the potential for the reflection process to have an educational effect within a community. We also reflect on our design process and provide insights for future design, for example, offering a multi-stakeholder perspective, platform support for survivors' sensemaking, and a balance between agency and informational support.

What Do You Meme? Modeling the Meaning of Internet Memes

Naitian Zhou

Internet memes encode meaning through the composition and interaction of meme templates and their fills. In this project, we try to learn semantically meaningful representations of meme templates. This allows us to ask questions about why people choose certain templates and what memes can tell us about the values and identities of online communities.

Credit Scores That Prioritize Customer Welfare: Theory and Evidence from Nigeria

Simón Ramírez Amaya

At the core of many consumer lending decisions is a credit score: an algorithmic assessment of a customer's creditworthiness. Traditional credit scores are designed to maximize lender profits and use machine learning algorithms to predict which customers will repay loans. This paper proposes and tests a different paradigm for consumer lending, in which "welfare-sensitive" credit scores allow the lender to balance expected profits against the expected welfare impacts of specific loans. Using data from a randomized controlled trial in Nigeria, we show how machine learning algorithms can be trained to predict the welfare impact of lending to a client, and how those welfare scores can be combined with traditional credit scores to characterize a Pareto-efficient tradeoff between welfare and profits. Our main result suggests that, in the Nigerian context, the lender could achieve an 11% gain in consumer welfare by sacrificing 0.1% of profits.
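As a hypothetical illustration of the scoring paradigm described in the abstract (not the paper's actual estimator), one can blend a predicted profit score and a predicted welfare score with a weight `lam` and sweep that weight to trace the attainable profit/welfare tradeoff for a fixed lending budget. All scores and the budget below are assumptions for the sketch.

```python
import numpy as np

def approve_with_weight(profit_scores, welfare_scores, lam, budget):
    """Rank applicants by a blended score that trades off predicted
    profit against predicted welfare impact, approving the top `budget`.
    lam=0 reproduces a traditional profit-maximizing score; lam=1
    lends purely on predicted welfare."""
    blended = (1 - lam) * profit_scores + lam * welfare_scores
    return np.argsort(blended)[::-1][:budget]

def pareto_frontier(profit_scores, welfare_scores, budget, lams):
    """Sweep the blending weight to trace (total profit, total welfare)
    points, approximating the Pareto tradeoff for a fixed budget."""
    points = []
    for lam in lams:
        chosen = approve_with_weight(profit_scores, welfare_scores,
                                     lam, budget)
        points.append((float(profit_scores[chosen].sum()),
                       float(welfare_scores[chosen].sum())))
    return points
```

Reading the frontier from left to right shows how much profit the lender gives up for each increment of welfare, which is the kind of tradeoff the paper's 11%-welfare-for-0.1%-profit result summarizes.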

If you have questions about this event, please contact Inessa Gelfenboym Lee.

Inessa Gelfenboym Lee
Assistant Director of Student Affairs
102 South Hall

Last updated: March 22, 2023