MIDS Capstone Project Spring 2025

YoloMerlot: Low-Cost Early-Season Yield Prediction Tool for Vineyards

Problem Statement

Vineyard operators struggle with manual, time-consuming, and costly methods for estimating grapevine yield, while existing commercial solutions require expensive equipment. Our project leverages YOLO-based computer vision to provide an affordable, smartphone-enabled tool that accurately predicts yield by simply analyzing recorded videos, optimizing productivity and reducing operational costs.

General Overview

YoloMerlot is a low-cost, early-season yield prediction tool that helps vineyard operators optimize grape production by replacing manual, error-prone yield estimation with an ML-driven solution. Utilizing computer vision, our platform processes smartphone-recorded videos to detect and track grape clusters, providing accurate yield predictions without expensive equipment. Users can easily upload videos, manage farm data, and obtain precise yield insights, reducing costs and improving decision-making. The data pipeline integrates video processing, object detection, and DeepSORT tracking, with extensive data augmentation and custom classifiers to enhance accuracy.

Value Proposition

Our key differentiator is a low-cost solution that lowers the barrier to entry for vineyards. By relying on ordinary smartphone cameras rather than specialized hardware, we make accurate yield prediction accessible to any operator.

How does it work?

Our pipeline uses three models, and each uploaded video is processed frame by frame. First, a YOLO model detects grape flower clusters and passes their bounding boxes to a classifier; the detector is tuned for recall so that as many flower clusters as possible are found. Second, a classifier trained on computer vision features (color histogram, edge density, local binary pattern, and discrete wavelet transform) is tuned for accuracy, filtering the detector's candidate boxes so the combined pipeline preserves recall while improving accuracy. Finally, the filtered bounding boxes are passed to a DeepSORT model, which tracks flower clusters across the video frames to produce a final cluster count.
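To make the classifier step concrete, the four hand-crafted features named above can be sketched with plain NumPy. This is a minimal, illustrative implementation only: the project itself likely computes these with libraries such as OpenCV, scikit-image, or PyWavelets, and every function name, bin count, and threshold below is an assumption, not the project's actual code.

```python
import numpy as np

def color_histogram(img, bins=8):
    # Per-channel intensity histogram, normalized by pixel count.
    # Illustrative bin count; the real feature set may differ.
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(img.shape[-1])
    ]).astype(float) / img[..., 0].size

def edge_density(gray, thresh=30):
    # Fraction of pixels whose combined horizontal/vertical gradient
    # magnitude exceeds a threshold (a crude stand-in for Canny/Sobel).
    gx = np.abs(np.diff(gray.astype(float), axis=1))
    gy = np.abs(np.diff(gray.astype(float), axis=0))
    edges = (gx[:-1, :] + gy[:, :-1]) > thresh
    return edges.mean()

def lbp_histogram(gray):
    # Basic 8-neighbor local binary pattern: each pixel gets an 8-bit
    # code from comparisons with its neighbors, then we histogram codes.
    g = gray.astype(int)
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (nb >= center).astype(int) << bit
    return np.histogram(codes, bins=256, range=(0, 256))[0].astype(float) / codes.size

def haar_dwt_energy(gray):
    # One level of a Haar discrete wavelet transform; return the mean
    # energy of the approximation and three detail subbands.
    g = gray.astype(float)
    h, w = g.shape[0] // 2 * 2, g.shape[1] // 2 * 2
    g = g[:h, :w]
    a = (g[0::2, 0::2] + g[0::2, 1::2] + g[1::2, 0::2] + g[1::2, 1::2]) / 4
    hh = (g[0::2, 0::2] - g[0::2, 1::2] + g[1::2, 0::2] - g[1::2, 1::2]) / 4
    vv = (g[0::2, 0::2] + g[0::2, 1::2] - g[1::2, 0::2] - g[1::2, 1::2]) / 4
    dd = (g[0::2, 0::2] - g[0::2, 1::2] - g[1::2, 0::2] + g[1::2, 1::2]) / 4
    return np.array([np.mean(b ** 2) for b in (a, hh, vv, dd)])

def crop_features(img):
    # Feature vector for one detected bounding-box crop, which a
    # downstream classifier would use to accept or reject the detection.
    gray = img.mean(axis=-1)
    return np.concatenate([
        color_histogram(img),
        [edge_density(gray)],
        lbp_histogram(gray),
        haar_dwt_energy(gray),
    ])
```

In this sketch, each YOLO bounding box would be cropped from the frame and passed to `crop_features`, and the resulting vector fed to the trained classifier to decide whether the detection is kept before tracking.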

Where is the data from?

The data is sourced from Efficient Vineyard's Roboflow platform. The four major datasets used are Day and Night Flower Clusters, Night Flower Clusters, Flower Clusters, and Grape Buds. Additionally, we incorporated manually labeled data from Emerald Slope Vines, curated by our team member Jake Brinkerhoff.

Key Learnings

Low-cost computer vision yield prediction is a feasible solution, showing promising early results. Our best detection model achieved a recall of 72%, while the classifier reached 90% accuracy within a single image group. However, more work is needed to generate larger and more consistent datasets, which could improve both recall and accuracy to the levels needed for deployment. Model performance is highly sensitive to factors such as vine or trellis type, growth stage, image quality, and lighting conditions. As such, a universal model may not be suitable for production use; instead, multiple models tailored to specific scenarios may be necessary. Despite these challenges, using video-based input offers significant time savings over manual cluster counting methods.

Acknowledgements

We would like to thank Tim Martin and Kade Casciato for being our subject matter experts and providing invaluable domain knowledge along the way, helping us build a product that truly can make a difference for vineyard operators across the country. We would also like to thank the Cornell team for their initial research in the space and for their support in gathering the data needed for the project.

Citation

Jaramillo, Jonathan, Justine Vanden Heuvel, and Kirstin H. Petersen. "Low-cost, computer vision-based, prebloom cluster count prediction in vineyards." Frontiers in Agronomy 3 (2021): 648080. https://doi.org/10.3389/fagro.2021.648080

More Information

Last updated: April 30, 2025