Jul 5, 2023

Empowering Equity: Multidisciplinary Team Recognized for Algorithmic Fairness Application

Team ‘Egaleco’ Wins Sarukkai Social Impact Award

There’s a saying in the data science industry: if the data you have isn’t good, having more of it isn’t helpful.

In fact, data isn’t always as objective or accurate as it’s made out to be; it’s prone to biases, errors, and leaks, and when flawed data is used to train machine learning products (e.g., predictive AI), the results can harm certain populations based on their race, gender, and other demographic characteristics.

To address this, 11 students from the Master of Information Management and Systems and Master of Information and Data Science programs came together to create Egaleco — a tool that helps data scientists assess their datasets for algorithmic bias and makes it easier to advance fairness in their use case. Egaleco has since won the May 2023 Sarukkai Social Impact Award, which was established by Sekhar and Rajashree Sarukkai to recognize projects with the greatest potential to solve important social problems.

“Fairness and transparency in AI is the central issue that will define if AI can live up to the promise of significant positive societal impact. Egaleco tackles this problem head-on with a necessary depth-first approach in mapping fairness metrics to a specific industry sector — healthcare,” the donors noted. “This kind of work is precisely what this award aspires to encourage — projects that can foresee such human and societal issues to help clear the path for technology to realize the promise of positive societal impact.”

The Beginning

It all began with a Slack post. 

Prior to working on Egaleco, MIMS students Gurpreet Kaur Khalsa and Mudit Mangal were individually inspired to explore the field of algorithmic fairness after working with I School Professor Deirdre Mulligan and Professor Zachary Pardos of the School of Education, who has also taught at the I School. Eventually, the two students joined forces and began researching, reading recent papers and speaking to industry experts to learn about existing toolkits. Interested in recruiting members, they posted to Slack and found MIDS student Allison Fox, who had experience researching these tools. 

From there, the team grew. Fox recruited fellow MIDS students Rex Pan, Carlie McCleary, and Hilsenton Alcee to lead data and development. Khalsa and Mangal added MIMS students Orissa Rose and Alan Kyle to their product and policy roster; Mary Grace Reich, Han Yang, and Jennifer Chan then joined as UX specialists to round out the team. 

“Fairness and transparency in AI is the central issue that will define if AI can live up to the promise of significant positive societal impact.”
— Sekhar Sarukkai

With such a large group, it was crucial to define roles and responsibilities early on. “Tools like Zoom, Slack, Notion, and Figma played a big role in facilitating collaboration,” Khalsa explained. “Knowing that coordination was likely going to be our biggest challenge, we invested a lot of time in the beginning to ensure that we had clear means of documenting and sharing information, tracking deadlines, and an understanding of how the pieces all came together towards the bigger vision.”

“The teams collaborated very closely throughout the semester, including through biweekly meetings with all team members, exchanging deliverables for feedback, and sharing individuals’ expertise and strengths across teams as needed,” Reich, Yang, and Chan added.

The Product

Their collaboration culminated in Egaleco, which the team hoped would help the healthcare industry ensure the fairness of its AI systems. Fairness is particularly important in this sector because machine learning is used across many areas of the field, from coverage and diagnostics to delivery and patient outcomes. In these applications, unfairness in models could be a matter of life or death. 

“Thus, we built Egaleco’s tooling and operationalization resources for healthcare partners that care about equitable access to care for communities who’ve been historically underserved by the healthcare system,” Rose commented. 

The team credited various classes for their success, such as Assistant Professor Morgan Ames’s INFO 203: Social Issues of Information, where they were challenged to consider “what we know we must support socially and what we can support technically.”

“While creating Egaleco,” Chan said, “we strived to create a tool for social good while also being aware of technical limitations imposed by the [aforementioned] sociotechnical gap.” 

They also thanked Professor Marti Hearst’s INFO 247: Information Visualization and Presentation, Professor Mulligan’s coursework on tech law, and Professor Niloufar Salehi’s Human-Centered AI for providing them with the skills to work through the project.

Next Steps

Multiple members have expressed interest in continuing their journey with Egaleco post-graduation. Rose plans to carry her capstone experience into her future work. “Our Egaleco research demonstrated that [...] industry partners need help operationalizing algorithmic fairness practices,” she stated. “True to the interdisciplinary nature of our team and the I School, I consult clients and merge technical, legal, and social perspectives to advance the use of AI for good.”

Similarly, Fox, Mangal, and Khalsa plan to continue collaborating on a product. “Our capstone work revealed that there’s a large market gap in the algorithmic fairness space, and we’re excited to use our shared product management, data science, and project management expertise to build a startup that can disrupt this space,” added Fox. 

Interested in learning more about the project? Contact the team via hello@egaleco.ai.

Egaleco UX team (Jennifer Chan, Han Yang, and Mary Grace Reich)
MIDS Product Team (Hilsenton Alcee, Allison Fox, Carlie McCleary, and Rex Pan)
MIMS Product & Policy Team (Orissa Rose, Alan Kyle, Mudit Mangal, and Gurpreet Kaur Khalsa)

Videos

2023 MIMS Final Project Team Egaleco Video

Egaleco - Advancing Fairness in Machine Learning


Last updated: August 10, 2023