Deep Learning Programming Workshop

What we’ll do

1) The gradient-implementation section in Udacity's Intro to Neural Networks (Lesson 2) does not follow Vincent's description of how to implement gradients. We will instead follow the a3 Octave implementation from Hinton's Coursera assignment 3, which includes the numerical-stability and momentum tricks Vincent describes, and port it to Python (a short NumPy sketch of those two ingredients follows this list).
2) The 50-70% accuracy target for hw2 (the TF CNN implementation) is a poor learning objective. As an addendum we will cover the LeNet architecture in Keras and show how much architecture matters in reaching ~98% accuracy (see the LeNet sketch below). We will also show how Keras matches up TF input/output dimensions for CNNs automatically. Most Kaggle competitions use Keras or TF layers.
3) Gradients, gradients, and more gradients. We will use TF to verify our hand-derived loss-function gradients (see the gradient-check sketch below).
4) The VAE explanation in the Udacity videos leaves out several important and fundamental details. We will cover probability distributions, Bayesian methods, generative models, Dustin Tran's excellent Edward library (http://edwardlib.org/tutorials/), and more (a small Edward example is sketched below).
5) We will cover some material from Jeremy Howard's fantastic deep learning MOOC. He is one of the most hands-on, talented data scientists around, and his MOOC contains fantastic content with excellent assignments on which to build a demonstration portfolio:

https://github.com/fastai/courses/tree/master/deeplearning1/nbs
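
For item 1, here is a minimal NumPy sketch (not the a3 port itself; all names are our own) of the two ingredients we will carry over from the Octave code: a max-subtracted softmax for numerical stability, and classical momentum updates.

import numpy as np

def stable_softmax(logits):
    # Subtract the per-row max before exponentiating so exp() cannot
    # overflow -- the same numerical-stability trick the a3 Octave code uses.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def train_softmax_regression(X, y_onehot, lr=0.1, momentum=0.9, epochs=100):
    # Classical momentum, as in assignment 3:
    #   v <- momentum * v - lr * grad;  W <- W + v
    n, d = X.shape
    k = y_onehot.shape[1]
    W = np.zeros((d, k))
    v = np.zeros_like(W)
    for _ in range(epochs):
        probs = stable_softmax(X @ W)
        grad = X.T @ (probs - y_onehot) / n  # dL/dW for mean cross-entropy
        v = momentum * v - lr * grad
        W = W + v
    return W

# Toy usage on a linearly separable problem.
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[labels]
W = train_softmax_regression(X, Y)
print("train accuracy:", (stable_softmax(X @ W).argmax(1) == labels).mean())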
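
For item 2, a LeNet-style model in Keras, roughly what we will walk through (the filter counts are our choice in the LeNet-5 spirit; this family of models typically lands around 98-99% test accuracy on MNIST). Note that only the first layer needs an input shape: Keras infers every intermediate CNN dimension for you.

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

# LeNet-style stack: conv -> pool -> conv -> pool -> dense -> softmax.
model = Sequential([
    Conv2D(20, (5, 5), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(50, (5, 5), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(500, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))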
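
For item 3, one way to let TF check a hand derivation, assuming TF 1.x (sessions and tf.gradients). We derive dL/dz = (softmax(z) - t) / N for the mean cross-entropy loss by hand and compare it against TF's autodiff result.

import numpy as np
import tensorflow as tf

np.random.seed(0)
logits_np = np.random.randn(4, 3).astype(np.float32)
onehot_np = np.eye(3, dtype=np.float32)[[0, 2, 1, 0]]

logits = tf.constant(logits_np)
onehot = tf.constant(onehot_np)

# Mean cross-entropy loss written out by hand.
probs = tf.nn.softmax(logits)
loss = -tf.reduce_mean(tf.reduce_sum(onehot * tf.log(probs), axis=1))

# TF's autodiff gradient of the loss w.r.t. the logits.
tf_grad = tf.gradients(loss, logits)[0]
with tf.Session() as sess:
    g_tf = sess.run(tf_grad)

# Our hand-derived gradient: (softmax(z) - t) / N.
exp = np.exp(logits_np - logits_np.max(axis=1, keepdims=True))
probs_np = exp / exp.sum(axis=1, keepdims=True)
g_hand = (probs_np - onehot_np) / logits_np.shape[0]

print(np.allclose(g_tf, g_hand, atol=1e-5))  # True if the derivation is right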
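
For item 4, before the VAE itself, a taste of Edward: a minimal Bayesian linear regression in the style of the Edward getting-started tutorial (Edward 1.x on TF 1.x; the toy data and variable names are ours).

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Toy data: y = 2x + noise.
X_train = np.random.randn(50, 1).astype(np.float32)
y_train = (2.0 * X_train[:, 0] + 0.1 * np.random.randn(50)).astype(np.float32)

X = tf.placeholder(tf.float32, [50, 1])
w = Normal(loc=tf.zeros(1), scale=tf.ones(1))        # prior over the weight
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))        # prior over the bias
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(50))  # likelihood

# Variational posteriors q(w) and q(b).
qw = Normal(loc=tf.get_variable("qw/loc", [1]),
            scale=tf.nn.softplus(tf.get_variable("qw/scale", [1])))
qb = Normal(loc=tf.get_variable("qb/loc", [1]),
            scale=tf.nn.softplus(tf.get_variable("qb/scale", [1])))

# Fit q to the posterior by maximizing the ELBO.
inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_iter=500)

The point of the example: KLqp maximizes an evidence lower bound, which is exactly the objective a VAE optimizes, one of the fundamental details the Udacity videos gloss over.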

