Sentiment analysis and natural language processing are common problems tackled with machine learning techniques. Getting accurate answers to questions about what people think, without trudging through thousands of reviews, calls for deep learning techniques such as neural networks. In this course, Sentiment Analysis with Recurrent Neural Networks in TensorFlow, you'll learn how to use recurrent neural networks (RNNs) to classify movie reviews by sentiment. First, you'll discover how to generate word embeddings using the skip-gram method of the word2vec model, and see how training this neural network can be made tractable with a special loss function based on noise-contrastive estimation. Next, you'll delve into RNNs, implement one to classify movie reviews, and compare and contrast the neural network with a standard machine learning model, the Naive Bayes classifier. Finally, you'll learn how to implement the same RNN using pre-built word embeddings. By the end of this course, you'll understand and be able to implement word embedding algorithms that generate numeric representations of text, and know how to build a basic classification model with RNNs on top of those embeddings.
A problem solver at heart, Janani has a master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds four patents for its real-time collaborative editing framework.
Course Overview Hi, my name is Janani Ravi. Welcome to this course on sentiment analysis using recurrent neural networks in TensorFlow. A little about myself: I have a master's degree in electrical engineering from Stanford, and I have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loony Corn, a studio for high-quality video content.

Recurrent neural networks are a versatile and powerful form of neural network, very useful for applications that need to consider context. RNNs are ideal for modeling sequences of data: frames in a movie, sentences in a paragraph, or stock returns over a period. For RNNs to work on text, we first build word embeddings, numeric representations of words that can be fed into a neural network. Generating word embeddings is a compute-heavy operation, which can be made tractable by using a special loss function based on noise-contrastive estimation. RNNs are especially well suited to natural language processing applications, and this course uses them to build a complete sentiment classification system. We use a specific RNN architecture known as the LSTM, or long short-term memory. This architecture overcomes a well-known source of instability when training plain RNNs: the problem of vanishing and exploding gradients.
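To make the embedding idea concrete, here is a minimal sketch of skip-gram training with a noise-contrastive (negative-sampling) style loss, written in plain NumPy rather than TensorFlow. Everything in it is illustrative, not taken from the course: the toy corpus, the dimensions (EMBED_DIM, NUM_NEG), and the learning rate are all assumptions. The key idea it shows is why this loss is cheap: instead of normalizing over the whole vocabulary, each step contrasts one true (center, context) pair against a handful of randomly sampled noise words.

```python
import numpy as np

# Toy skip-gram with a negative-sampling (NCE-style) loss, sketched in NumPy.
# All names and sizes here are illustrative, not from the course.
rng = np.random.default_rng(0)

corpus = "the movie was great the plot was dull".split()
vocab = sorted(set(corpus))
word_to_id = {w: i for i, w in enumerate(vocab)}
V, EMBED_DIM, NUM_NEG = len(vocab), 8, 2

W_in = rng.normal(scale=0.1, size=(V, EMBED_DIM))   # input (center-word) embeddings
W_out = rng.normal(scale=0.1, size=(V, EMBED_DIM))  # output (context-word) embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair(center, context, lr=0.1):
    """One SGD step: pull the true (center, context) pair together,
    push NUM_NEG randomly sampled noise words away."""
    v = W_in[center]
    # one positive sample plus noise words drawn uniformly from the vocabulary
    # (a real implementation samples from a unigram noise distribution and
    # excludes accidental hits on the true context; skipped here for brevity)
    samples = [context] + list(rng.integers(0, V, size=NUM_NEG))
    labels = [1.0] + [0.0] * NUM_NEG
    grad_v = np.zeros(EMBED_DIM)
    for s, y in zip(samples, labels):
        p = sigmoid(W_out[s] @ v)      # predicted probability the pair is "real"
        g = p - y                      # gradient of the binary cross-entropy
        grad_v += g * W_out[s]
        W_out[s] -= lr * g * v
    W_in[center] -= lr * grad_v

# slide a context window of size 1 over the toy corpus
for epoch in range(50):
    for i, w in enumerate(corpus):
        for j in (i - 1, i + 1):
            if 0 <= j < len(corpus):
                train_pair(word_to_id[w], word_to_id[corpus[j]])

print(W_in[word_to_id["movie"]])  # the learned 8-dimensional vector for "movie"
```

The cost per step is proportional to NUM_NEG + 1 rather than the vocabulary size, which is what makes noise-contrastive training practical on real corpora with hundreds of thousands of words.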