Deep Learning for NLP: Memory-based Networks


In the journey to understand deep learning models for natural language processing (NLP), the next step is memory-based networks, which are far more capable of handling extended context in language. While basic neural networks outperform traditional machine learning (ML) models, they still fall short on larger and more demanding language data problems. In this course you will learn about memory-based networks such as the gated recurrent unit (GRU) and long short-term memory (LSTM). You will explore their architectures and variants, and where they succeed and where they fail for NLP. You will then work through their implementations on product classification data and compare the results to understand each architecture's effectiveness. Upon completing this course, you will have learned the basics of memory-based networks and their implementation in TensorFlow, and you will understand the effect of memory and extended context on NLP datasets.
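To give a sense of what the implementations look like, here is a minimal sketch of LSTM- and GRU-based text classifiers in TensorFlow/Keras. The hyperparameters (vocabulary size, embedding dimension, number of classes) are hypothetical placeholders, not values from the course's product classification dataset, and the sketch assumes the text has already been tokenized into integer sequences.

```python
import tensorflow as tf

# Hypothetical hyperparameters; the actual values depend on the
# product classification dataset used in the course.
VOCAB_SIZE = 20000
EMBED_DIM = 128
NUM_CLASSES = 10

# An LSTM-based classifier: an embedding layer feeds a memory-based
# recurrent layer, whose final hidden state drives a softmax over classes.
lstm_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# A GRU variant differs only in the recurrent layer, which makes
# side-by-side comparisons of the two architectures straightforward.
gru_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Compile both models identically so that accuracy differences reflect
# the architecture rather than the training setup.
for model in (lstm_model, gru_model):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```

Because the two models differ in a single layer, training them on the same integer-encoded sequences and comparing validation accuracy is a straightforward way to see how each architecture handles extended context.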