Convo Nets for Visual Recognition: Computer Vision & CNN Architectures

In this 10-video course, learners explore the concept and classification of activation functions, the limitations of the Tanh and Sigmoid functions, and how those limitations can be resolved with the rectified linear unit (ReLU), along with the significant benefits ReLU provides. You will observe how to implement the ReLU activation function in convolutional networks using Python. Next, discover the core tasks involved in implementing computer vision and developing CNN models from scratch for object image classification using Python and Keras. Examine the concept of the fully-connected layer and its role in convolutional networks, as well as the CNN training process workflow and the essential elements you need to specify during training. The final tutorial in this course lists and compares the various convolutional neural network architectures. In the concluding exercise, you will recall the benefits of applying ReLU in CNNs, list the prominent CNN architectures, and implement the ReLU function in convolutional networks using Python.
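
To give a sense of the kind of workflow the course covers, below is a minimal sketch of a Keras CNN that uses ReLU throughout, ends in a fully-connected classification head, and specifies the essential training elements (optimizer, loss, metrics). The layer sizes, the 28x28 grayscale input shape, and the random stand-in data are illustrative assumptions, not the course's exact code.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# ReLU in isolation: max(0, x). Unlike Sigmoid and Tanh, it does not
# saturate for positive inputs, which mitigates vanishing gradients.
x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(np.maximum(0.0, x))  # [0.  0.  0.  1.5 3. ]

# Minimal CNN for image classification with ReLU activations,
# ending in fully-connected (Dense) layers for classification.
# Input shape and class count are illustrative (MNIST-like data).
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),                       # bridge to the fully-connected head
    layers.Dense(128, activation="relu"),   # fully-connected layer
    layers.Dense(10, activation="softmax")  # one output unit per class
])

# Essential elements specified for training: optimizer, loss, metrics.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data, used purely to show the fit() training step.
x_train = np.random.rand(256, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 10, size=(256,))
model.fit(x_train, y_train, epochs=1, batch_size=32, validation_split=0.1)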
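
For the final tutorial's comparison of prominent CNN architectures, one possible starting point (an assumption about tooling, not material from the course) is that several well-known architectures ship as prebuilt models in tf.keras.applications, so their depth and parameter counts can be inspected directly:

from tensorflow.keras import applications

# Instantiate a few prominent architectures without pretrained weights
# and compare their size; classes=10 is an arbitrary illustrative choice.
for name, builder in [("VGG16", applications.VGG16),
                      ("ResNet50", applications.ResNet50),
                      ("InceptionV3", applications.InceptionV3)]:
    net = builder(weights=None, include_top=True, classes=10)
    print(f"{name}: {net.count_params():,} parameters, {len(net.layers)} layers")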