Music Search and Recommendation from Millions of Songs
Advances in music production, distribution, and consumption have made millions of songs available, via the Internet, to virtually anyone on the planet. To allow users to retrieve the content they want, algorithms for automatic music indexing and recommendation are a must.
In this talk, I will discuss some aspects of automated music analysis for music search and recommendation: i) automated music tagging for semantic retrieval (e.g., searching for “funky jazz with male vocals”), and ii) a query-by-example paradigm for content-based music recommendation, wherein a user queries the system by providing one or more songs, and the system responds with a list of relevant or similar song recommendations (e.g., playlist generation for online radio). Finally, I will introduce our most recent research on context-aware recommendation, which leverages various sensor signals in smartphones to infer user context (activity, mood) and provide music recommendations accordingly, without requiring an active user query (zero click).
I will provide both high-level discussion and technical detail. For example, for query-by-example search, collaborative filtering techniques perform well when historical data (e.g., user ratings, user playlists, etc.) is available. However, their reliance on historical data impedes performance on novel or unpopular items. To combat this problem, we rely on content-based similarity, which naturally extends to novel items but is typically outperformed by collaborative filtering methods. I will present a method for optimizing content-based similarity by learning from a sample of collaborative filter data. I will also show how this algorithm may be adapted to improve recommendations when a variety of information beyond musical content is available (e.g., music video clips, web documents, lyrics, and/or artwork describing musical artists).
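To make the idea concrete, here is a minimal, purely illustrative sketch of learning a content-based similarity from collaborative filter supervision: per-feature weights on audio features are tuned so that content similarity agrees with CF-derived similar/dissimilar pairs. The toy data, the diagonal weighting, and the squared-loss objective are my assumptions for illustration, not the actual algorithm presented in the talk.

```python
# Illustrative sketch (not the talk's algorithm): learn per-feature weights
# for a content-based similarity so it matches collaborative-filter (CF) pairs.
import numpy as np

rng = np.random.default_rng(0)

# Toy audio features for 6 songs (rows), 4 content features each (hypothetical).
X = rng.normal(size=(6, 4))

# CF supervision: (i, j, sim) pairs; sim = 1 if CF data deems the songs similar.
pairs = [(0, 1, 1), (0, 2, 0), (1, 3, 0), (2, 4, 1), (3, 5, 1), (4, 5, 0)]

def similarity(w, xi, xj):
    """Content similarity under nonnegative per-feature weights w."""
    d = np.sum(w * (xi - xj) ** 2)      # weighted squared distance
    return 1.0 / (1.0 + d)             # in (0, 1]; higher = more similar

def loss(w):
    """Squared error between content similarity and CF labels."""
    return sum((similarity(w, X[i], X[j]) - s) ** 2 for i, j, s in pairs)

# Crude projected gradient descent on the squared loss.
w = np.ones(4)
lr = 0.5
for _ in range(500):
    grad = np.zeros(4)
    for i, j, s in pairs:
        diff2 = (X[i] - X[j]) ** 2
        sim = similarity(w, X[i], X[j])
        # d(sim)/d(w) = -sim**2 * diff2, by the chain rule
        grad += 2 * (sim - s) * (-sim ** 2) * diff2
    w = np.maximum(w - lr * grad, 0.0)  # project weights back to nonnegative

print("learned feature weights:", np.round(w, 3))
```

Because the learned similarity depends only on content features, it applies directly to novel songs that have no CF history, which is the cold-start advantage the paragraph above describes.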
Gert Lanckriet is an associate professor of electrical and computer engineering at UC San Diego, where he currently heads the Computer Audition Laboratory (CALab) and leads an interdepartmental group on Computational Statistics and Machine Learning (COSMAL). He researches the interplay between machine learning, applied statistics, and convex optimization, inspired by and with applications to computer audition and music information retrieval.
He was awarded the SIAM Optimization Prize in 2008 and is the recipient of a Hellman Fellowship, an IBM Faculty Award, an NSF CAREER Award, and an Alfred P. Sloan Foundation Research Fellowship. In 2011, MIT Technology Review named him one of the 35 top young technology innovators in the world (TR35). In 2014, he received the Best Ten-Year Paper Award at the International Conference on Machine Learning.
In 2014, he co-founded Benefunder, an innovative organization that works with wealth management firms to connect philanthropists with leading researchers across the nation to fund their research. He received a master’s degree in electrical engineering from the Katholieke Universiteit Leuven, Belgium, and M.S. and Ph.D. degrees in electrical engineering and computer science from UC Berkeley.