MIDS Capstone Project Spring 2024

Got Meals?

Problem & Motivation

"What's for Dinner?" is a pressing question in today's world, contributing to stress, frustration, and food waste. In the United States, food waste is estimated at between 30-40% of the food supply (USDA).

Data Source & Data Science Approach

Our solution harnesses computer vision technology to analyze the contents of users' refrigerators and pantries. Users upload photos of their ingredients, and the application identifies food items from the image files (JPEG or HEIC) via an API backed by an EfficientNet architecture. The identified ingredients are then cross-referenced with a vast database of recipes using Elasticsearch to provide personalized meal suggestions tailored to the user's dietary preferences and restrictions.
 

The application follows a modular architecture: label training data and image files are fed into an EfficientNet model running on Minikube and Docker, and the labeled images are displayed on a Streamlit interface. Streamlit returns the list of identified ingredients, which is passed via an API to Elasticsearch. The search parameters are used to query the recipe database, and the recommended recipes are returned as JSON and presented on the Streamlit interface.
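
The flow might look roughly like the sketch below; the classification endpoint, index name, and field names are illustrative assumptions rather than the project's actual values.

```python
# Illustrative sketch of the Streamlit flow: upload a photo, call the classification API,
# then search the recipe index with the identified ingredients. The endpoint URL, index
# name, and field names are assumptions for this example.
import requests
import streamlit as st
from elasticsearch import Elasticsearch

CLASSIFIER_API = "http://localhost:8000/classify"  # hypothetical EfficientNet service endpoint
es = Elasticsearch("http://localhost:9200")

uploaded = st.file_uploader("Upload a photo of your fridge or pantry", type=["jpg", "jpeg", "heic"])
if uploaded is not None:
    # Send the raw image bytes to the classification service and read back ingredient labels.
    resp = requests.post(CLASSIFIER_API, files={"image": uploaded.getvalue()})
    ingredients = resp.json().get("ingredients", [])
    st.write("Identified ingredients:", ingredients)

    # Query the recipe index for recipes that match the identified ingredients.
    hits = es.search(
        index="recipes",
        query={"match": {"ingredients": " ".join(ingredients)}},
        size=5,
    )["hits"]["hits"]
    for hit in hits:
        st.write(hit["_source"].get("title"), hit["_score"])
```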

Recipe Database

Leveraged the RecipeNLG dataset of 2.2 million recipes and 270,000 ingredients. Lemmatized words to increase match likelihood (e.g., "apples" → "apple"). Focused efforts on an image model identifying the top 100 ingredients, as EDA revealed a long tail of "edge" ingredients.
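
As a rough illustration of the lemmatization step, here is a sketch using NLTK's WordNet lemmatizer; the actual library and normalization rules used in the project may differ.

```python
# Sketch of ingredient lemmatization so plural forms like "apples" match the
# singular "apple" used in recipes. Assumes NLTK's WordNet lemmatizer.
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # one-time download of the WordNet data
lemmatizer = WordNetLemmatizer()

raw_ingredients = ["apples", "tomatoes", "chicken breasts"]
normalized = [
    " ".join(lemmatizer.lemmatize(word) for word in ingredient.split())
    for ingredient in raw_ingredients
]
print(normalized)  # ['apple', 'tomato', 'chicken breast']
```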

Image Classification

Used seven food image datasets totaling 31,654 images initially. Addressed class imbalance (e.g., 4,874 apple images vs. 27 fish sauce images). Training size: 13,800 images. Test size: 6,330 images. Validation size: 2,070 images.

Key Data Pipeline Features 

Adapter layer for integrating new datasets into a standardized format. Class unification layer mapping image classes to a standard list. Multi-processing image pipeline preparing raw data for training based on input type (format conversion, denoising, scaling, cropping) at ~30k images/hour on desktop hardware. Sampling layer adjusting for class variances to create training datasets.
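
A simplified sketch of such a preprocessing step, assuming Pillow and scikit-image; paths, parameter values, and the exact ordering of operations are illustrative rather than the project's actual configuration.

```python
# Rough sketch of the multi-processing preprocessing pipeline: format conversion,
# light denoising, center-cropping, and bi-quintic scaling to the model input size.
# Paths and parameters are assumptions. HEIC inputs would first need conversion to
# a Pillow-readable format (e.g., via pillow-heif).
from multiprocessing import Pool
from pathlib import Path

import numpy as np
from PIL import Image
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import resize

TARGET_SIZE = (224, 224)  # chosen close to the median image size of (268, 229)

def preprocess(path: Path) -> Path:
    """Convert one raw image into a denoised, cropped, scaled training image."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    img = denoise_tv_chambolle(img, weight=0.05, channel_axis=-1)  # light denoising
    # Center-crop to a square so the aspect ratio is preserved before scaling.
    h, w, _ = img.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    img = img[top:top + side, left:left + side]
    # Bi-quintic (order=5) scaling to the EfficientNet input size.
    img = resize(img, TARGET_SIZE, order=5, anti_aliasing=True)
    out_path = path.with_suffix(".train.jpg")
    Image.fromarray((img * 255).astype(np.uint8)).save(out_path, "JPEG")
    return out_path

if __name__ == "__main__":
    raw_paths = list(Path("data/raw").glob("**/*.jpg"))
    with Pool() as pool:  # fan the work out across CPU cores
        processed = pool.map(preprocess, raw_paths)
```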

Evaluation

EfficientNet

  • Features: We are using raw pixels as features for our model.
  • Most Influential Factors for Feature Performance:
    • Quality of scaling to the target input image size; we use skimage bi-quintic (order-5) scaling
    • Target image size close to the median image size to reduce scaling effects: target size (224, 224); median size (268, 229)
  • Model Architecture Selection: To gauge performance in a constrained environment, we compared core architectures with the same classification-layer setup. We selected the EfficientNetV2M core model and evaluated different "top" configurations (dense layers, batch normalization, dropout, activations); one such configuration is sketched after this list.
  • Model Performance: 94% accuracy with only 200 images per class
  • Real-world Assessment: In "lab and wild" testing, the model struggles with lower-quality data and packaged items but performs well on fruits, veggies, and unpackaged items.
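
A minimal sketch of a transfer-learning setup along these lines, assuming TensorFlow/Keras; the head width, dropout rate, optimizer, and loss shown here are illustrative rather than the configuration that was ultimately selected.

```python
# EfficientNetV2M core with ImageNet weights and a custom classification "top"
# (dense layer, batch normalization, dropout). Hyperparameters are assumptions.
import tensorflow as tf

NUM_CLASSES = 100  # the top-100 ingredient classes

core = tf.keras.applications.EfficientNetV2M(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
core.trainable = False  # transfer learning first; unfreeze later layers to fine-tune

model = tf.keras.Sequential([
    core,
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```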

Elasticsearch

  • Search Scenarios: Easy - "chicken," "cheese". Medium - "trout," "lemon," "apple". Hard - "beef," "potatoes," "asparagus," "mushrooms," excluding "tomatoes". We returned the top 5 results for each scenario and calculated average relevance scores; an example query for the hard scenario is sketched below.
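
The hard scenario could be expressed roughly as in the following sketch; the index name and field name are assumptions.

```python
# Illustrative Elasticsearch query for the "hard" scenario: require several ingredients
# and exclude one. Index and field names are assumptions for this sketch.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

required = ["beef", "potatoes", "asparagus", "mushrooms"]
query = {
    "bool": {
        "must": [{"match": {"ingredients": ing}} for ing in required],
        "must_not": [{"match": {"ingredients": "tomatoes"}}],
    }
}
resp = es.search(index="recipes", query=query, size=5)  # keep the top 5 results
hits = resp["hits"]["hits"]
avg_score = sum(h["_score"] for h in hits) / max(len(hits), 1)
print([h["_source"].get("title") for h in hits], avg_score)
```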

Key Learnings & Impact

Data Pipeline: Fast, flexible, extensible, efficient. Consumes multiple data formats via standard configs.

Model Training: ImageNet weights and augmentation enabled good accuracy with minimal data per class. Transfer learning and fine-tuning improved performance. Different EfficientNet head configurations captured more generalizable information to boost performance.

Elasticsearch: Lemmatizing increased ingredient match rates. The "boost" feature enabled incorporating user preferences.
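
As an illustration, a boosted query could weight preferred ingredients without requiring them; the field names and boost values below are assumptions.

```python
# Sketch of using "boost" to reflect user preferences: required ingredients go in
# "must", while preferred ones go in "should" with higher boosts so matching recipes
# score higher. This dict would be passed as the query, as in the earlier example.
preference_query = {
    "bool": {
        "must": [{"match": {"ingredients": "mushrooms"}}],
        "should": [
            {"match": {"ingredients": {"query": "spinach", "boost": 2.0}}},
            {"match": {"ingredients": {"query": "garlic", "boost": 1.5}}},
        ],
    }
}
```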

Future Improvements

More granular class structures mapping N predictions to one ingredient. Larger "in-the-wild" training datasets. Multi-phase prediction (UPC recognition then ingredient classification). Explore aiding low-income families. Implement front-end image segmentation.

Impact

Streamlines meal planning to reduce stress and food waste. Promotes sustainable food practices for environmental benefit.

Acknowledgments

Our team is extremely grateful to our professors, Joyce and Todd. We would not have gotten this far in our project without their strong support and encouragement.


Video

Got Meals? Demo Video


Last updated:

April 16, 2024