Data Science 290
Generative AI: Foundations, Techniques, Challenges, and Opportunities
Recent developments in neural network architectures, algorithms, and computing hardware have led to a revolution now commonly referred to as generative AI. Large language models (LLMs) can generate seemingly human-like text for tasks such as summarization and question answering, and similar strategies have produced comparable advances with images and audio. Given today’s capabilities and those anticipated in the near future, generative AI is poised to become a tool used in a wide variety of ways, and therefore to have a profound effect on our lives and on society as a whole.
This course is a broad introduction to these new technologies and is split conceptually into three parts. The introductory section covers the historical background and key ideas leading up to Transformer architectures and how they are trained. The section on practical aspects and techniques covers how to deploy, use, and train LLMs, including core concepts such as prompt tuning, quantization, and parameter-efficient fine-tuning, as well as common use-case patterns. Finally, we discuss the challenges and opportunities presented by generative AI, highlighting critical issues such as bias and inclusivity, misinformation, and safety, along with intellectual property concerns.
Our focus will be on the practical aspects of LLMs, enabling students to become both effective and responsible users of generative AI technologies.