MIDS Capstone Project Fall 2018

Melon AI: a sound-recognition app for the deaf and hard-of-hearing community

Approximately 5% of the world's population (a staggering 466 million people) suffers from disabling hearing loss. We set out to create an impactful solution for this community that addresses some of its everyday needs. Our mobile application uses artificial intelligence to recognize key sound events of interest to this community, such as car horns and baby cries, where immediate alerts and continual logging are critical for the user. While the deaf community has benefitted from innovation in the app space, up until now it's been mostly in the areas of sound amplification and text-to-speech/speech-to-text. This app is optimized for low latency on Android so that it works in real time for the user.

The Melon AI app converts a sound wave (from the mic) into a mel spectrogram image, which serves as the main feature fed into a convolutional neural network that classifies the sound into one of eight classes. Average inference time is about 15 ms, so the user never has to worry about missing a beat, and the app can also be synced with a wearable device.
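As a rough illustration of the feature-extraction step described above, here is a minimal NumPy sketch of converting a waveform into a log-scaled mel spectrogram. The specific parameters (16 kHz sample rate, 1024-point FFT, 512-sample hop, 64 mel bands) are illustrative assumptions, not the app's actual settings, and the production app presumably runs an optimized on-device implementation rather than Python.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):         # falling slope
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def melspectrogram(wave, sr=16000, n_fft=1024, hop=512, n_mels=64):
    # Frame the signal, apply a Hann window, take the power spectrum,
    # then project each frame onto the mel filterbank
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    # Log scale (dB), as typically fed to a CNN as an "image"
    return 10.0 * np.log10(np.maximum(mel, 1e-10))

# Stand-in for one second of mic audio: a 440 Hz tone
sr = 16000
t = np.arange(sr) / sr
wave = 0.5 * np.sin(2 * np.pi * 440.0 * t)
spec = melspectrogram(wave, sr=sr)
print(spec.shape)  # (frames, mel bands) -> (30, 64)
```

The resulting 2-D array of time frames by mel bands is what a CNN can then treat like a grayscale image for classification.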

Last updated:

October 1, 2019