Linear Models & Gradient Descent: Managing Linear Models
Explore the concept of machine learning linear models, the classifications of linear models, and the prominent statistical approaches used to implement them. This 11-video course also explores the concepts of bias, variance, and regularization. Key concepts covered here include linear models and the various classifications used in predictive analytics; the different statistical approaches used to implement linear models, namely simple regression, multiple regression, and analysis of variance (ANOVA); and the essential components of a generalized linear model: the random component, the linear predictor, and the link function. Next, discover the differences between the ANOVA and analysis of covariance (ANCOVA) approaches to statistical testing; learn how to implement linear regression models using Scikit-learn; and examine bias, variance, and regularization and how they are used to evaluate predictive models. Learners then explore ensemble techniques, see how bagging and boosting algorithms are used to manage predictions, and learn to implement a bagging algorithm with the random forest approach using Scikit-learn. Finally, observe how to implement a boosting ensemble algorithm using the AdaBoost classifier in Python.
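As a quick illustration of the Scikit-learn linear regression workflow mentioned above, the following is a minimal sketch, assuming a small synthetic dataset (the data and hyperparameters here are illustrative, not taken from the course materials).

```python
# Minimal sketch: fitting a linear regression model with Scikit-learn
# on synthetic data (illustrative only, not the course dataset).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Simple synthetic regression problem: y = 3x + noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + rng.normal(0, 1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)

print("coefficient:", model.coef_)      # estimated slope
print("intercept:", model.intercept_)   # estimated intercept
print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))
```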
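To connect the bias, variance, and regularization concepts to model evaluation, here is a hedged sketch comparing ordinary least squares with Ridge regression under cross-validation; the `alpha` value and dataset are assumptions made for illustration.

```python
# Minimal sketch: regularization and the bias-variance trade-off.
# Ridge adds a penalty that accepts a little bias to reduce variance.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

# Synthetic, noisy problem with many features (illustrative assumption)
X, y = make_regression(n_samples=100, n_features=50, noise=10.0, random_state=0)

for name, estimator in [("OLS", LinearRegression()),
                        ("Ridge (alpha=1.0)", Ridge(alpha=1.0))]:
    scores = cross_val_score(estimator, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.3f}")
```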
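The bagging portion of the course uses the random forest approach with Scikit-learn; a minimal sketch is shown below, assuming the built-in iris dataset as a stand-in for whatever data the course actually uses.

```python
# Minimal sketch: bagging-style ensemble with RandomForestClassifier.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree is trained on a bootstrap sample of the training data (bagging),
# and predictions are aggregated across trees.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, forest.predict(X_test)))
```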
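For the boosting topic, a comparable sketch using Scikit-learn's `AdaBoostClassifier` follows; the dataset and hyperparameters are again illustrative assumptions rather than the course's own.

```python
# Minimal sketch: boosting with AdaBoostClassifier on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost fits a sequence of weak learners, upweighting samples that
# earlier learners misclassified.
booster = AdaBoostClassifier(n_estimators=50, random_state=0)
booster.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, booster.predict(X_test)))
```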