Deep learning has grown by leaps and bounds in the last few years, and more and more AI solutions use it as their foundational technology. Studying it, however, presents several challenges: IT professionals from varying backgrounds need a simplified resource for learning the concepts and building models quickly. In this course, instructor Kumaran Ponnambalam provides a simplified path through the optimization and tuning options available for deep learning models and shows you how to use those options to improve your models. He begins with a review of deep learning, including artificial neural networks and their architectures. Next, Kumaran walks through the process of hyperparameter tuning, examining the building blocks of neural networks and the levers available to tune them, and offering recommendations and best practices along the way. He concludes with an end-to-end tuning example.
-
Introduction
- Optimizing neural networks
- Prerequisites for the course
- Setting up the exercise files
-
Introduction to Deep Learning Optimization
- What is deep learning?
- Review of artificial neural networks
- An ANN model
- Model optimization and tuning
- The deep learning tuning process
- Experiment setups for the course
- Chapter quiz
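As a taste of the ANN review above, the forward pass of a small artificial neural network can be sketched in plain NumPy (the layer sizes, activation choice, and random data here are illustrative only, not taken from the course):

```python
import numpy as np

def relu(z):
    # Rectified linear unit: max(0, z) element-wise
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    """One forward pass through a one-hidden-layer ANN."""
    h = relu(x @ W1 + b1)   # hidden layer activations
    return h @ W2 + b2      # linear output layer

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                   # batch of 4 samples, 3 features
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)
y = forward(x, W1, b1, W2, b2)                # shape (4, 1)
```

Training such a model is then a matter of adjusting the weights to reduce a loss, which is where the tuning levers covered in the following chapters come in.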
-
Tuning the Deep Learning Network
- Epoch and batch size tuning
- Epoch and batch size experiment
- Hidden layers tuning
- Determining nodes in a layer
- Choosing activation functions
- Initializing weights
- Chapter quiz
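Two of the levers listed in this chapter, activation choice and weight initialization, pair naturally: He initialization is a common default for ReLU layers and Glorot (Xavier) for tanh/sigmoid layers. A minimal NumPy sketch of both schemes (the fan sizes below are arbitrary, and the course's own recommendations may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

def he_init(fan_in, fan_out):
    # He initialization: variance 2/fan_in, suited to ReLU activations
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def glorot_init(fan_in, fan_out):
    # Glorot/Xavier initialization: variance 2/(fan_in + fan_out),
    # suited to tanh or sigmoid activations
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

W_relu = he_init(256, 128)     # for a ReLU hidden layer
W_tanh = glorot_init(256, 128) # for a tanh hidden layer
```

Matching the initialization variance to the activation keeps signal magnitudes roughly stable as they propagate through the layers.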
-
Tuning Backpropagation
- Vanishing and exploding gradients
- Batch normalization
- Optimizers
- Optimizer experiment
- Learning rate
- Learning rate experiment
- Chapter quiz
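The learning rate experiment above can be mimicked in miniature: gradient descent on a simple quadratic makes the effect of the learning rate (and a momentum term, one ingredient of the optimizers covered here) easy to see. This toy loss is illustrative only and not from the course:

```python
def sgd_quadratic(lr, steps=50, momentum=0.0):
    """Minimize f(w) = w^2 from w = 5 with (momentum) gradient descent.

    lr and momentum are the tunable levers discussed in this chapter;
    the function returns the final distance from the minimum at w = 0."""
    w, v = 5.0, 0.0
    for _ in range(steps):
        grad = 2.0 * w            # df/dw
        v = momentum * v - lr * grad
        w = w + v
    return abs(w)

slow = sgd_quadratic(lr=0.01)  # too small: still far from the minimum
fast = sgd_quadratic(lr=0.1)   # larger rate: converges in the same budget
```

A rate that is too small wastes the step budget, while one that is too large can overshoot and diverge, which is why the learning rate is usually tuned on a log scale.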
-
Overfitting Management
- Overfitting in ANNs
- Regularization
- Regularization experiment
- Dropouts
- Dropout experiment
- Chapter quiz
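The two overfitting controls in this chapter can each be sketched in a few lines of NumPy. The dropout variant shown is "inverted" dropout, which rescales surviving activations at training time; the L2 penalty is the term added to the loss. Both are generic illustrations, not the course's own code:

```python
import numpy as np

rng = np.random.default_rng(7)

def inverted_dropout(activations, rate):
    """Zero out a fraction `rate` of units and rescale the survivors,
    so the expected activation magnitude is unchanged (training only)."""
    keep = 1.0 - rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

def l2_penalty(weights, lam):
    # L2 regularization term added to the loss: lam * sum(w^2)
    return lam * np.sum(weights ** 2)

a = np.ones(1000)
dropped = inverted_dropout(a, rate=0.5)  # entries are 0.0 or 2.0
```

At inference time dropout is disabled, and thanks to the rescaling no further adjustment of the weights is needed.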
-
Model Tuning Exercise
- Tuning exercise: Problem statement
- Acquire and process data
- Tuning the network
- Tuning backpropagation
- Avoiding overfitting
- Building the final model
-
Conclusion
- Continuing your deep learning journey