Setting up your ML application
- Train/dev/test sets
- Bias/Variance
- Basic “recipe” for machine learning
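A minimal numpy sketch of carving a dataset into train/dev/test sets (the function name and the illustrative 98/1/1 split for a large dataset are assumptions, not part of the notes):

```python
import numpy as np

def train_dev_test_split(X, y, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle the m examples and carve off small dev/test sets.

    X: (m, n_features) array, y: (m,) array.
    For large datasets most examples go to training, e.g. 98% train,
    1% dev, 1% test; the fractions here are only illustrative.
    """
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.permutation(m)
    n_dev, n_test = int(m * dev_frac), int(m * test_frac)
    dev_idx = idx[:n_dev]
    test_idx = idx[n_dev:n_dev + n_test]
    train_idx = idx[n_dev + n_test:]
    return (X[train_idx], y[train_idx]), (X[dev_idx], y[dev_idx]), (X[test_idx], y[test_idx])
```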
Regularizing your neural network
- Regularization
- Why regularization reduces overfitting
- Dropout regularization
- Understanding dropout
- Other regularization methods
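A minimal numpy sketch of inverted dropout applied to one layer's activations at training time (the function name, `keep_prob=0.8`, and the seed are illustrative):

```python
import numpy as np

def inverted_dropout(a, keep_prob=0.8, seed=0):
    """Apply inverted dropout to an activation matrix `a` during training.

    Each unit is kept with probability keep_prob; the surviving activations
    are scaled by 1/keep_prob so the expected value of the layer's output is
    unchanged, which is why no extra scaling is needed at test time.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(a.shape) < keep_prob  # True with probability keep_prob
    a = a * mask                            # zero out the dropped units
    a = a / keep_prob                       # "inverted" rescaling
    return a, mask                          # the same mask is reused in backprop

a = np.random.randn(4, 5)
a_drop, _ = inverted_dropout(a, keep_prob=0.8)
```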
Setting up your optimization problem
- Normalizing inputs
- Vanishing/exploding gradients
- Numerical approximation of gradients
- Gradient Checking
- Gradient Checking implementation notes
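A minimal sketch of gradient checking with the two-sided (centered) difference, assuming the cost `J` takes a flat parameter vector; the helper and variable names are illustrative:

```python
import numpy as np

def gradient_check(J, theta, grad, eps=1e-7):
    """Compare an analytic gradient `grad` of cost J at `theta` against a
    two-sided numerical approximation.

    J: function mapping a 1-D parameter vector to a scalar cost.
    theta, grad: 1-D numpy arrays of the same shape.
    Returns the relative difference; values near 1e-7 are typically fine,
    values near 1e-3 or larger suggest a bug in backprop.
    """
    approx = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        approx[i] = (J(plus) - J(minus)) / (2 * eps)
    return np.linalg.norm(approx - grad) / (np.linalg.norm(approx) + np.linalg.norm(grad))

# Example: J(theta) = sum(theta^2), whose analytic gradient is 2*theta
theta = np.array([1.0, -2.0, 3.0])
print(gradient_check(lambda t: np.sum(t ** 2), theta, 2 * theta))
```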
Optimization algorithms
- Mini-batch gradient descent
- Understanding mini-batch gradient descent
- Exponentially weighted averages
- Understanding exponentially weighted averages
- Bias correction in exponentially weighted averages
- Gradient descent with momentum
- RMSprop
- Adam optimization algorithm
- Learning rate decay
- The problem of local optima
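A minimal sketch of a single Adam update, which combines the momentum and RMSprop exponentially weighted averages with bias correction (the helper name is illustrative; `v` and `s` start at zeros and `t` is the 1-based iteration count):

```python
import numpy as np

def adam_step(w, dw, v, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameter `w` given its gradient `dw`.

    v: exponentially weighted average of the gradient (momentum term).
    s: exponentially weighted average of the squared gradient (RMSprop term).
    """
    v = beta1 * v + (1 - beta1) * dw             # first moment (momentum)
    s = beta2 * s + (1 - beta2) * dw ** 2        # second moment (RMSprop)
    v_hat = v / (1 - beta1 ** t)                 # bias correction
    s_hat = s / (1 - beta2 ** t)
    w = w - lr * v_hat / (np.sqrt(s_hat) + eps)  # parameter update
    return w, v, s

w = np.zeros(3)
v = np.zeros_like(w)
s = np.zeros_like(w)
dw = np.array([0.1, -0.2, 0.05])
w, v, s = adam_step(w, dw, v, s, t=1)
```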
Hyperparameter tuning
- Tuning process
- Using an appropriate scale to pick hyperparameters
- Hyperparameter tuning in practice: Pandas vs. Caviar
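A minimal sketch of picking a hyperparameter on an appropriate (log) scale, using the learning rate as the example; the search range 1e-4 to 1e-1 is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample r uniformly on the exponent range [-4, -1] and set alpha = 10**r.
# This spends equal search effort on each decade (1e-4..1e-3, 1e-3..1e-2, ...),
# unlike sampling alpha itself uniformly, which would mostly try large values.
r = rng.uniform(-4, -1, size=5)
alphas = 10.0 ** r
print(alphas)
```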
Batch Normalization
- Normalizing activations in a network
- Fitting Batch Norm into a neural network
- Why does Batch Norm work?
- Batch Norm at test time
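A minimal numpy sketch of the Batch Norm forward computation for one layer over a mini-batch (function name and shapes are illustrative; at test time the per-batch mean and variance would be replaced by running averages collected during training):

```python
import numpy as np

def batchnorm_forward(z, gamma, beta, eps=1e-8):
    """Batch-normalize the pre-activations `z` of one layer over a mini-batch.

    z: (n_units, m) array of pre-activations for m examples.
    gamma, beta: learnable scale and shift, shape (n_units, 1).
    """
    mu = z.mean(axis=1, keepdims=True)       # per-unit mean over the batch
    var = z.var(axis=1, keepdims=True)       # per-unit variance over the batch
    z_norm = (z - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    z_tilde = gamma * z_norm + beta          # learnable rescale and shift
    return z_tilde, mu, var

z = np.random.randn(3, 10)
gamma, beta = np.ones((3, 1)), np.zeros((3, 1))
z_tilde, mu, var = batchnorm_forward(z, gamma, beta)
```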
Multi-class classification
- Softmax regression
- Training a softmax classifier
- Deep Learning frameworks
- TensorFlow
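A minimal numpy sketch of a numerically stable softmax over the class dimension (the helper name is illustrative; the 4-class scores are just example values):

```python
import numpy as np

def softmax(z):
    """Softmax over the class dimension.

    z: (n_classes, m) array of output-layer scores for m examples.
    Subtracting the per-column max leaves the result unchanged but avoids
    overflow in np.exp for large scores.
    """
    t = np.exp(z - z.max(axis=0, keepdims=True))
    return t / t.sum(axis=0, keepdims=True)

z = np.array([[5.0], [2.0], [-1.0], [3.0]])  # scores for 4 classes, 1 example
print(softmax(z))         # class probabilities
print(softmax(z).sum())   # sums to 1.0
```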