ML Algorithms: Machine Learning Implementation Using Calculus & Probability
This course explores the use of multivariate calculus, derivatives and function representations, differentiation, and linear algebra to optimize machine learning (ML) algorithms. In 10 videos, learners will observe how to use probability theory to enable prediction and other types of analysis in ML, including the role of probability in the chain rule and Bayes' rule. First, you will explore the concepts of variance, covariance, and random vectors before examining maximum likelihood and maximum a posteriori estimation. Next, you will learn how to use parameter estimation to determine the best values of model parameters through data assimilation, and how this can be applied to ML. You will then explore the role of calculus in deep learning and the importance of derivatives. Continue by learning about optimization methods such as gradient descent and how to decide whether to increase or decrease a weight in order to maximize or minimize a given metric. You will learn to implement differentiation and integration in R and to compute derivatives and integrals in Python. Finally, you will examine the use of limits and series expansions in Python.
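The role of Bayes' rule mentioned above can be sketched in a few lines of Python. The snippet below is not taken from the course; the prior, sensitivity, and false-positive rate for the hypothetical diagnostic test are illustrative assumptions.

```python
# Illustrative sketch of Bayes' rule: update a prior P(disease) with a test
# result using P(disease | positive) = P(positive | disease) * P(disease) / P(positive).
p_disease = 0.01               # assumed prior
p_pos_given_disease = 0.95     # assumed sensitivity (likelihood)
p_pos_given_healthy = 0.05     # assumed false-positive rate

# Total probability of a positive test (law of total probability).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior via Bayes' rule.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")  # about 0.161
```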
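As a rough illustration of the variance, covariance, random-vector, and likelihood topics, the following sketch assumes NumPy and a Gaussian model (neither is named in the description): the sample mean vector and the sample covariance matrix normalized by N are the maximum-likelihood estimates of the Gaussian's parameters.

```python
# Sketch: sample mean and covariance of a 2-D random vector as MLEs of a
# Gaussian's parameters (assumed example, not course code).
import numpy as np

rng = np.random.default_rng(0)
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
x = rng.multivariate_normal(true_mean, true_cov, size=1000)

mean_mle = x.mean(axis=0)                      # MLE of the mean vector
cov_mle = np.cov(x, rowvar=False, bias=True)   # MLE of the covariance (divides by N)

print("estimated mean:", mean_mle)
print("estimated covariance:\n", cov_mle)
```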
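The gradient-descent idea, deciding whether to raise or lower a weight based on the sign of a derivative, might look like the minimal sketch below; the one-dimensional loss function and learning rate are purely illustrative assumptions.

```python
# Minimal gradient-descent sketch: the weight moves against the sign of the
# gradient, so the loss decreases at each step.
def loss(w):
    return (w - 3.0) ** 2          # minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # derivative of the loss with respect to w

w = 0.0                            # initial weight
lr = 0.1                           # learning rate (step size)
for step in range(50):
    w -= lr * grad(w)              # decrease w if gradient > 0, increase if < 0

print(f"w after 50 steps: {w:.4f}")  # close to 3.0
```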
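For the Python portion on derivatives and integrals, a symbolic library such as SymPy is one plausible choice; the description does not name a specific tool, so the following is only a sketch of the general technique.

```python
# Symbolic differentiation and integration in Python (assuming SymPy).
import sympy as sp

x = sp.symbols("x")
f = x**3 + 2*x

print(sp.diff(f, x))               # derivative: 3*x**2 + 2
print(sp.integrate(f, x))          # indefinite integral: x**4/4 + x**2
print(sp.integrate(f, (x, 0, 1)))  # definite integral on [0, 1]: 5/4
```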
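Limits and series expansions can be explored in the same symbolic style; again this assumes SymPy rather than any tool named by the course.

```python
# Limits and Taylor-series expansion in Python (assuming SymPy).
import sympy as sp

x = sp.symbols("x")

print(sp.limit(sp.sin(x) / x, x, 0))   # limit as x -> 0 is 1
print(sp.series(sp.exp(x), x, 0, 5))   # expansion of e**x about 0:
                                       # 1 + x + x**2/2 + x**3/6 + x**4/24 + O(x**5)
```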