Deep Learning and Its Tempestuous (Theoretical and Empirical) Relationship with Optimization Theory’s Gradient Descent

Thursday, July 6, 2017
12:00 pm to 1:00 pm PDT
Dr. James G. Shanahan

This talk will present deep learning and its tempestuous (empirical and theoretical) relationship with optimization theory’s gradient descent. Shakespeare might have structured such a talk as follows and used the lens of reverse mode autodiff to aid with understanding:

  • Act 1 Hack it up
  • Act 2 BackProp: theory to the rescue
  • Act 3 Layer by layer learning, a medieval pastime
  • Act 4 ReLUs (rectified linear units), a new frontier
  • Act 5 Skip Connections, the state-of-the-art (SoTA) frontier (LSTMs, ResNets, Highway Nets, DenseNets)

These five acts will be supported by examples and Jupyter notebooks in Python and TensorFlow. In addition, this talk will show how reverse mode autodiff provides an efficient and effective calculus framework that is transforming how we do machine learning.
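To make the reverse-mode autodiff idea concrete, here is a minimal, self-contained sketch in plain Python (an illustration only, not the talk's actual notebooks or TensorFlow's implementation): each variable records its parents and the local partial derivatives of the operation that produced it, and `backward()` propagates gradients back through that record via the chain rule.

```python
# Minimal reverse-mode autodiff sketch. Each Var stores the local
# partials with respect to its parents; backward() multiplies the
# incoming gradient by each local partial and accumulates into the
# parents (sum over paths, per the chain rule).

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # sequence of (parent_var, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def relu(self):
        # Act 4's star: gradient is 1 where the input is positive, else 0.
        return Var(max(0.0, self.value),
                   [(self, 1.0 if self.value > 0 else 0.0)])

    def backward(self, grad=1.0):
        # Recursive per-path propagation; fine for small expression
        # graphs like this one (real frameworks use a topological sort).
        self.grad += grad
        for parent, local in self.parents:
            parent.backward(grad * local)

# f(x, w) = relu(w * x + w); at x = 2, w = 3:
# df/dw = x + 1 = 3 and df/dx = w = 3.
x, w = Var(2.0), Var(3.0)
y = (w * x + w).relu()
y.backward()
print(w.grad, x.grad)  # 3.0 3.0
```

Note that `w` feeds into the graph twice, and its two path contributions (2.0 from the product, 1.0 from the addition) accumulate correctly — exactly the bookkeeping that backpropagation automates layer by layer.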

Log in to the webinar

  1. If you are a MIDS student, log in with your normal credentials; otherwise, type in your name and enter the room as a guest. There may be a short waiting period; please be patient.

  2. When you get into the room, a popup window will appear with instructions to connect your audio.

    • If you are accessing the session from within the United States, click on the “Dial-Out” option, type in your phone number and choose “Join.” You will receive a call directly from the room's conference line. Answer it to connect to the lecture's audio.

    • If you are accessing the session from outside of the United States, click on the “Dial-In” option. The conference number and participant code will appear so that you can access the audio from your phone or Internet phone service.

To run a compatibility test to ensure that your system is properly configured, please go to http://link.datascience.berkeley.edu/907SFM2620001Lz004c9Q00

Jimi Shanahan has spent the past 25 years developing and researching cutting-edge artificial intelligence systems, splitting his time between industry and academia. He has (co)founded several companies, including Church and Duncan Group Inc. (2007), a boutique consultancy in large-scale AI that he runs in San Francisco; RTBFast (2012), a real-time bidding engine infrastructure play for digital advertising systems; and Document Souls (1999), a document-centric anticipatory information system. In 2012 he went in-house as SVP of data science and chief scientist at NativeX, a mobile ad network that was acquired by MobVista in early 2016. In addition, he has held appointments at AT&T (executive director of research), Turn Inc. (founding chief scientist), Xerox Research, Mitsubishi Research, and Clairvoyance Corp. (a spinoff research lab from CMU). He also advises several high-tech startups, including Quixey, Aylien, ChartBoost, DigitalBank you.co, VoxEdu, and others.

Jimi has been affiliated with the University of California (Berkeley and Santa Cruz) since 2008, where he teaches graduate courses on big data analytics, machine learning, deep learning, and stochastic optimization. He is also currently a visiting professor of data science at the University of Ghent, Belgium. He has published six books, more than 50 research publications, and over 20 patents in the areas of machine learning and information processing. Jimi received his Ph.D. in engineering mathematics from the University of Bristol, U.K., and holds a Bachelor of Science degree from the University of Limerick, Ireland. He is an EU Marie Curie fellow. In 2011 he was selected as a member of the Silicon Valley 50 (the top 50 Irish Americans in technology).

Last updated: August 3, 2017