Formal Privacy Methods for Statistical Learning
For firms and organizations that handle personal data, the desire to extract valuable information and insight must be balanced against the privacy interests of individuals. This task has grown considerably harder in the last few decades with the development of advanced learning algorithms that can leverage statistical patterns to infer personal information. As a result, databases that were until recently considered anonymized have been shown to be vulnerable to attack. Beginning with the seminal definition of differential privacy, researchers have responded with a new generation of algorithmic techniques, based on strong adversary models and offering mathematical bounds on worst-case privacy loss. This course is an introduction to the field known as formal privacy or differential privacy. It covers both foundational theory and algorithmic techniques for building private algorithms. A particular focus is placed on algorithms for statistical learning, and on research that incorporates a statistical perspective.
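As a small illustration of the kind of guarantee the course studies, the sketch below shows the classic Laplace mechanism for a counting query: because adding or removing one individual changes a count by at most 1, adding Laplace noise with scale 1/ε yields an ε-differentially private release. This is a minimal sketch, not course material; the function name and toy data are illustrative.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1: one individual's presence or
    absence changes the count by at most 1, so Laplace noise with
    scale 1/epsilon suffices for an epsilon-DP release.
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical toy data: ages of seven individuals.
ages = [23, 35, 41, 58, 62, 29, 47]
rng = np.random.default_rng(0)

# One private release of "how many individuals are 40 or older?"
# (true answer is 4; the noise masks any single person's contribution).
private_answer = laplace_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

Smaller ε means more noise and stronger privacy; the course makes this accuracy-privacy trade-off precise.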
The first third of the course is structured like a bootcamp, with problem sets to build fluency in the most common mathematical structures used in the field. The latter two-thirds is structured like a research seminar, with student-led discussion of published articles each week. The course concludes with a final research project, giving students a chance to develop new algorithms, extend theoretical results, or build systems that incorporate formal privacy guarantees.