Improving Neural Networks: Data Scaling & Regularization
Explore how to create and optimize machine learning neural network models, scale data, and apply batch normalization to counter internal covariate shift. Learners will discover the learning rate adaptation schedule, batch normalization, and the use of L1 and L2 regularization to manage overfitting. Key concepts covered in this 10-video course include the approach to creating deep learning network models, along with the steps involved in optimizing networks, including deciding size and budget; how to implement the learning rate adaptation schedule in Keras by using SGD and specifying the learning rate, epochs, and decay in Google Colab; and scaling data with the prominent data scaling methods, data normalization and data standardization. Next, you will learn the concepts of batch normalization and internal covariate shift; how to implement batch normalization using Python and TensorFlow; and the steps to implement L1 and L2 regularization to manage overfitting. Finally, observe how to implement gradient descent using Python, including the steps for library import and data creation.
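The learning rate adaptation schedule mentioned above can be sketched in plain Python. This is the classic time-based decay rule that the legacy Keras SGD optimizer applied when a `decay` argument was passed; the values for the initial rate and decay factor are illustrative assumptions, not the course's exact settings.

```python
# Time-based learning rate decay, a minimal sketch of the schedule the
# course implements with Keras's SGD optimizer. initial_lr and decay
# here are assumed example values.

def decayed_lr(initial_lr: float, decay: float, epoch: int) -> float:
    """Time-based decay: lr = initial_lr / (1 + decay * epoch)."""
    return initial_lr / (1.0 + decay * epoch)

# The learning rate shrinks monotonically as training progresses.
schedule = [decayed_lr(0.1, 0.01, e) for e in range(5)]
```

In older Keras API versions this roughly corresponds to constructing the optimizer as `SGD(learning_rate=0.1, decay=0.01)`; newer TensorFlow releases express the same idea through learning rate schedule objects instead.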
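The two prominent data scaling methods named in the description, data normalization and data standardization, can be sketched with NumPy as follows; the toy array is an assumed example, not course data.

```python
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Min-max normalization: rescale each feature to the [0, 1] range."""
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def standardize(x: np.ndarray) -> np.ndarray:
    """Standardization: zero mean and unit standard deviation per feature."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Assumed toy data: two features on very different scales.
data = np.array([[1.0, 200.0],
                 [2.0, 300.0],
                 [3.0, 400.0]])
```

Normalization preserves the shape of the distribution within a fixed range, while standardization centers each feature, which typically helps gradient-based training converge.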
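Batch normalization, which the course implements with Python and TensorFlow, can be sketched at the array level with NumPy: each mini-batch is normalized to zero mean and unit variance per feature, then rescaled by learned parameters. The gamma, beta, and epsilon values below are assumed defaults, not the course's code.

```python
import numpy as np

def batch_norm(x: np.ndarray, gamma: float, beta: float,
               eps: float = 1e-5) -> np.ndarray:
    """Normalize a mini-batch per feature, then scale and shift.

    gamma (scale) and beta (shift) are learned during training; eps
    guards against division by zero. Values here are illustrative.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

In TensorFlow itself this per-feature normalization is provided by the `tf.keras.layers.BatchNormalization` layer, which also tracks running statistics for inference.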
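The L1 and L2 regularization penalties used to manage overfitting can be sketched directly: each adds a weight-dependent term to the training loss. The lambda strengths below are assumed example values.

```python
import numpy as np

def l1_penalty(weights: np.ndarray, lam: float) -> float:
    """L1 (lasso) penalty: lam * sum of absolute weights.

    Encourages sparse weights by pushing small values exactly to zero.
    """
    return float(lam * np.sum(np.abs(weights)))

def l2_penalty(weights: np.ndarray, lam: float) -> float:
    """L2 (ridge) penalty: lam * sum of squared weights.

    Shrinks all weights smoothly toward zero without zeroing them out.
    """
    return float(lam * np.sum(weights ** 2))
```

In Keras the same penalties attach to a layer via its `kernel_regularizer` argument, e.g. with `tf.keras.regularizers.l1` or `tf.keras.regularizers.l2`.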
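The final topic, implementing gradient descent in Python starting from library import and data creation, can be sketched end to end for a one-variable linear fit; the data, learning rate, and iteration count are assumed for illustration.

```python
import numpy as np  # library import

# Data creation: an assumed toy dataset following y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Gradient descent on mean squared error for y_pred = w*x + b.
w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    y_pred = w * x + b
    # Partial derivatives of MSE with respect to w and b.
    grad_w = 2.0 * np.mean((y_pred - y) * x)
    grad_b = 2.0 * np.mean(y_pred - y)
    w -= lr * grad_w
    b -= lr * grad_b
```

With these settings the parameters converge close to the true slope and intercept; the same update rule underlies the optimizers used elsewhere in the course.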