MIMS Final Project 2023

Egaleco: Advancing Fairness in Machine Learning (UX Research and Design)

Egaleco, meaning equality in the international language Esperanto, is a new fairness toolkit that aims to help machine learning practitioners in the healthcare sector easily and effectively identify bias in their models and perform context-appropriate fairness assessments. Egaleco provides ML practitioners with use-case-specific, policy-informed context so that, during ML model development, they can understand the importance of fairness and the results of a fairness assessment, and articulate both to non-technical stakeholders.

The Egaleco User Experience (UX) team led the UX research, UI design, prototyping, and user testing efforts while collaborating with the Egaleco Product + Policy team to engineer the product and incorporate policy-informed educational content. Additionally, the Egaleco UX team created a design hub that documents key design decisions informed by our research and suggests recommendations for future toolkit evolution and development.

The Challenge

While even the ideal ML fairness toolkit constrains what fairness means and the work of AI ethics (Wong et al., 2022), ML fairness toolkits are an access point to the designers and decision-makers developing ML algorithms that are rapidly deployed with real impacts on humans. As AI systems, and their biases, increasingly permeate every aspect of our lives (Mehrabi et al., 2021), the need for effective approaches to ensure fairness is urgent and yet unmet.

Our Goals

Recent research highlights usability challenges in the leading ML fairness toolkits currently available (Lee & Singh, 2021). Based on these evaluations, this project aimed to further explore and address several key UX challenges.

  • Help ML practitioners confidently and accurately assess the fairness of their models by choosing the most relevant fairness metrics for their use case.
  • Empower ML practitioners to explain and advocate for the importance of fairness to non-technical stakeholders.
  • Ensure the toolkit UX is easy to use, and therefore easily adopted and integrated into ML practitioners’ workflows despite time constraints (Deng et al., 2022; Wong et al., 2022).
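To make the first goal concrete: different fairness metrics encode different notions of fairness, and choosing between them is exactly the decision the toolkit supports. As an illustration only (this is not Egaleco's API; the function names and data are invented for the example), a minimal sketch of two common group-fairness metrics a practitioner might weigh against each other:

```python
# Hypothetical illustration of two group-fairness metrics for binary
# classification. Which one is "most relevant" depends on the use case:
# demographic parity compares selection rates, while equal opportunity
# compares error behavior (true-positive rates) across groups.

def _rate(values):
    """Mean of a list of 0/1 values."""
    return sum(values) / len(values)

def demographic_parity_diff(y_pred, group):
    """Positive-prediction rate of group 1 minus that of group 0."""
    p1 = _rate([p for p, g in zip(y_pred, group) if g == 1])
    p0 = _rate([p for p, g in zip(y_pred, group) if g == 0])
    return p1 - p0

def equal_opportunity_diff(y_true, y_pred, group):
    """True-positive-rate gap between groups, computed only over
    individuals whose true label is positive."""
    tpr1 = _rate([p for p, t, g in zip(y_pred, y_true, group) if t == 1 and g == 1])
    tpr0 = _rate([p for p, t, g in zip(y_pred, y_true, group) if t == 1 and g == 0])
    return tpr1 - tpr0

if __name__ == "__main__":
    # Toy data: 8 individuals, a binary protected attribute in `group`.
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print("demographic parity diff:", demographic_parity_diff(y_pred, group))
    print("equal opportunity diff:", equal_opportunity_diff(y_true, y_pred, group))
```

A value near zero on a given metric indicates parity between groups on that metric only; the two metrics can disagree on the same model, which is why use-case context matters.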

Deliverables

The project produced four deliverables:

  • A Figma "prototype for implementation" delivered early to the Egaleco Product + Policy team to guide the front-end of the toolkit.
  • A Figma "prototype for inspiration" that we iterated on and user-tested more extensively, free of the delivery deadline.
  • A paper that documents, in depth, our foundational research findings, design explorations, user-testing results, and design recommendations for future toolkit evolution and development.
  • A design hub that distills learnings from the research and design exploration phases of the project and summarizes our research-informed recommendations.

References

Deng, W. H., Nagireddy, M., Lee, M. S. A., Singh, J., Wu, Z. S., Holstein, K., & Zhu, H. (2022). Exploring how machine learning practitioners (try to) use fairness toolkits. arXiv Preprint arXiv:2205.06922.

Lee, M. S. A., & Singh, J. (2021). The landscape and gaps in open source fairness toolkits. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3411764.3445261

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6), 1-35.

Wong, R. Y., Madaio, M., & Merrill, N. (2022). Seeing like a toolkit: How toolkits envision the work of AI ethics. ACM, New York, NY, USA, 21 pages. https://www.researchgate.net/publication/358687120_Seeing_Like_a_Toolkit...

Last updated: June 16, 2023