This course will explore what HCI knowledge and methods can bring to the study, design, and evaluation of AI systems with a particular emphasis on the human, social, and ethical impact of those systems. Students will read papers and engage in discussions around the three main components of a human-centered design process as it relates to an AI system:
- needs assessment,
- design and development, and
- evaluation.
Following these three phases, students will learn what needs assessment might look like when designing AI systems, how those systems might be prototyped, and what HCI methods for real-world evaluation can teach us about assessing AI systems in their context of use. The course will also discuss challenges unique to AI systems, such as understanding and communicating technical capabilities and recognizing and recovering from errors.
Guest lectures will be given by experts in AI ethics (e.g., Timnit Gebru) and fairness, accountability, and transparency in AI systems (e.g., Motahhare Eslami).