Self-Supervised Learning for the Real World
Spearheaded by advances in natural language processing, machine learning is undergoing a transformative shift towards large, generalist models trained with self-supervised learning (SSL). In this talk, I'll discuss two challenges facing this paradigm, as well as some paths towards surmounting them.
First, I'll discuss the problem of task ambiguity. While the space of tasks that models can perform is expanding rapidly, the number of bits used to specify the task is shrinking. Given these two opposing forces, how do we ensure that models learn the tasks we intend? I'll discuss how we can measure the effects of such task ambiguity on humans and language models, as well as work showing how two-way interaction between users and large models can make strides on this problem in NLP and computer vision.
Second, I'll discuss the challenge of domain-agnostic SSL, necessary for realizing the benefits of SSL in high-impact settings such as healthcare, the sciences, and engineering. I'll present DABS, a Domain-Agnostic Benchmark for SSL algorithms, covering data from 12 different fields (e.g., text, genomics, wearable sensors, and particle physics). With DABS, we develop and present the first SSL methods that succeed on such a broad range of modalities.
Alex Tamkin is a fifth-year Ph.D. student in computer science at Stanford, advised by Noah Goodman and part of the Stanford NLP Group and the Stanford AI Lab.
His research focuses on large pretrained models (e.g., GPT-3) and how we can better build, understand, and control them. He is especially interested in multimodal and domain-agnostic pretraining, which has the potential to unlock important applications in healthcare, engineering, and the natural sciences.
He has also worked in reinforcement learning, human-robot interaction, and computational astronomy, and has spent time at Google Brain, Google Language, and Google Civics. He is a recipient of the Open Philanthropy AI Fellowship.
If you have questions about this event, please contact Ruiqi Zhong at firstname.lastname@example.org.