Jun 18, 2025

Data Science Master’s Students Build Tool To Help Visually Impaired People Navigate the Streets Safely

SafeWalk, by Master of Information and Data Science students Helen Hu, Ke Zhang, Kris Junghee Lee, and Ula Zhu, aims to help visually impaired people navigate the streets safely with timely audio alerts of nearby obstacles. The team was awarded the Hal R. Varian Capstone Award for Spring 2025. 

To learn more, we interviewed the team —

What inspired your project?

Kris: I have a friend with a visual impairment. He relies primarily on a white cane when he goes out, and the rapid rise of shared bikes and scooters has made walking more hazardous for him, with sidewalks becoming increasingly cluttered and unpredictable.

At the same time, I’ve been fascinated by how far image processing and vision-language models (VLMs) have advanced. These technologies are evolving at an incredible speed, driven largely by the pursuit of convenience, efficiency, and automation in fields like self-driving cars and advanced robotics. 

But I began to ask myself: what if we redirected even a fraction of that innovation toward something more fundamental, empowering individuals in their daily lives? That question shifted my perspective. I became convinced that such technologies could offer real value to those who need them most — people who are often left out of the innovation spotlight, like the visually impaired or those who face challenges in independent mobility.

For many of us, a ten-minute walk outside is something we do without a second thought. But for some, even that short walk can feel unsafe. Imagine how meaningful it would be to turn that simple act into a safe, empowering daily experience. That, to me, is the kind of impact worth building for. I wanted to develop something simple, affordable, and accessible — something that makes daily life not just more convenient, but fundamentally safer. SafeWalk reflects that vision: using cutting-edge AI as an inclusive technology to empower people of all abilities.

“That blend of technical training and human connection is what I found most powerful about the program, and it’s what made this project so meaningful to me.”

— Ula Zhu

How did your I School curriculum help prepare you for this project?

Ula: The I School curriculum really shaped how I approached SafeWalk, both technically and from a human-centered perspective. Courses like DATASCI 207: Applied Machine Learning and DATASCI 281: Computer Vision gave me the tools to deploy the fine-tuned YOLO model on a device. I also found DATASCI 205: Fundamentals of Data Engineering to be helpful when it came to building efficient data pipelines. But what made the biggest impact was DATASCI 221: Modern AI Strategy and Applications. That class helped me think more deeply about the why behind what we built. It pushed me to consider not just model performance, but how to communicate value, build trust, and design for real people. It helped me frame SafeWalk not just as a technical project, but as something that empowers people — offering confidence, safety, and independence through thoughtful interaction design and storytelling. That blend of technical training and human connection is what I found most powerful about the program, and it’s what made this project so meaningful to me.
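To make that deployment step concrete, here is a minimal sketch, assuming the Ultralytics Python API, of how a fine-tuned YOLO model might run detection on a single camera frame. It is illustrative rather than the team’s actual code; the weights file name and the helper function are hypothetical.

```python
# Illustrative sketch only, not SafeWalk's actual implementation.
# Assumes the Ultralytics YOLO API; "safewalk_yolo.pt" is a
# hypothetical fine-tuned weights file.
from ultralytics import YOLO

model = YOLO("safewalk_yolo.pt")

def detect_obstacles(frame):
    """Return (class_name, confidence) pairs for one camera frame."""
    results = model(frame, verbose=False)
    return [
        (model.names[int(box.cls[0])], float(box.conf[0]))
        for box in results[0].boxes
    ]
```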

Do you have any future plans for the project?

Kris: As we developed the prototype, we realized that achieving our goals would require solving several key challenges. One major issue was that the physical characteristics of street obstacles vary significantly from country to country. This means that in order to fine-tune the YOLO model effectively, we need to gather sufficient, localized training images for each region and conduct additional training. We also discovered that providing real-time vision-language functionality — such as image descriptions and visual Q&A — requires a stable server infrastructure capable of handling API requests with low latency.
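As a rough illustration of that retraining step, the sketch below shows what fine-tuning a pretrained YOLO model on a region-specific obstacle dataset could look like with the Ultralytics API. The dataset config file, hyperparameters, and export format are assumptions for the example, not the team’s settings.

```python
# Illustrative sketch only: fine-tuning a pretrained YOLO model on
# localized street-obstacle images. "obstacles_local.yaml" is a
# hypothetical dataset config pointing at region-specific images.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # generic pretrained starting point
model.train(
    data="obstacles_local.yaml",  # hypothetical localized dataset
    epochs=50,                    # illustrative hyperparameters
    imgsz=640,
)
model.export(format="onnx")       # one option for on-device deployment
```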

“SafeWalk wasn’t just a technical exercise — it was a lesson in empathy, collaboration, and designing with purpose.”

— Ke Zhang

Moving forward, if we aim to officially launch the app, our first step will be to select a target country. Based on that, we will secure region-specific image data and retrain the model to suit the local environment. In parallel, we plan to explore partnerships or funding opportunities — especially with government agencies, public institutions, or socially driven companies — to ensure stable and scalable server infrastructure.

Once this foundation is in place, we also plan to enhance the software with more specialized mobile development to support a wider range of devices, including iOS. This will allow us to implement the voice alert and obstacle notification features in a more flexible and responsive way across different platforms. Our long-term goal is to make SafeWalk a practical and inclusive tool that can be adapted to real-world environments and made accessible to people around the world.

How could this project make an impact, or who will it serve?

Helen: SafeWalk is designed to serve individuals who may face challenges navigating public spaces, especially people who are visually impaired, elderly, or feel anxious walking alone in unfamiliar environments. By using real-time obstacle alerts and surrounding description features, the app can help users become more aware of their surroundings and make safer, more confident decisions while walking.
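To sketch how such an alert might reach a user, the example below speaks detections aloud with the pyttsx3 text-to-speech library. It is a hypothetical illustration: detect_obstacles() is the placeholder helper from the earlier sketch, and the confidence threshold is an arbitrary assumption.

```python
# Illustrative sketch only: speaking obstacle alerts with pyttsx3.
# detect_obstacles() is the hypothetical helper sketched earlier.
import pyttsx3

engine = pyttsx3.init()

def announce(detections, min_conf=0.5):
    """Speak one short alert per sufficiently confident detection."""
    for name, conf in detections:
        if conf >= min_conf:
            engine.say(f"Caution: {name} ahead")
    engine.runAndWait()  # blocks until all queued alerts are spoken
```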

Importantly, SafeWalk offers a practical and accessible solution. Anyone with a camera-equipped smartphone can use the app without needing any specialized or expensive hardware. In the future, if additional support from hardware providers becomes available, we could lower the barriers to access even further, making SafeWalk more usable, scalable, and impactful for those who need it most.

Beyond direct users, I believe the project also sends a broader message about the role of AI in society. SafeWalk demonstrates that technologies typically reserved for cutting-edge domains can be redirected toward inclusive, human-centered solutions that enhance everyday safety.

Additional info to share?

Ke: We are incredibly grateful for the support and inspiration we received throughout the MIDS program. We’d like to highlight how diverse skill sets came together on this project, from computer vision and mobile development to user research and storytelling. SafeWalk wasn’t just a technical exercise — it was a lesson in empathy, collaboration, and designing with purpose.

Winning the Capstone Award was an unexpected but deeply meaningful moment for all of us. It gave us a chance to reflect on how much we’ve grown, not just as data scientists, but as problem solvers working toward something that matters.

