Sentiment Analysis with Recurrent Neural Networks in TensorFlow

Recurrent neural networks (RNNs) are ideal for working with sequences of data. In this course, you'll explore how word embeddings and recurrent neural networks are used for sentiment analysis.
Course info
Rating
(27)
Level
Intermediate
Updated
Dec 20, 2017
Duration
2h 54m
Description

Sentiment analysis and natural language processing are common problems to solve using machine learning techniques. Getting accurate answers to questions without trudging through reviews by hand requires deep learning techniques such as neural networks. In this course, Sentiment Analysis with Recurrent Neural Networks in TensorFlow, you'll learn how to use recurrent neural networks (RNNs) to classify movie reviews based on sentiment. First, you'll discover how to generate word embeddings using the skip-gram method in the word2vec model, and see how this neural network can be optimized by using a special loss function, the noise-contrastive estimator. Next, you'll delve into understanding RNNs and how to implement an RNN to classify movie reviews, and compare and contrast the neural network implementation with a standard machine learning model, the Naive Bayes algorithm. Finally, you'll learn how to implement the same RNN with pre-built word embeddings. By the end of this course, you'll be able to understand and implement word embedding algorithms to generate numeric representations of text, and know how to build a basic classification model with RNNs using these word embeddings.
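As a point of reference for the Naive Bayes comparison mentioned above, here is a minimal sketch of such a baseline using scikit-learn. The library choice and the toy data are assumptions for illustration only, not the course's own code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; the course works with a movie-review dataset.
reviews = ["a gripping, beautifully acted film", "dull plot and wooden dialogue"]
labels = [1, 0]  # 1 = positive sentiment, 0 = negative

# Bag-of-words counts feeding a multinomial Naive Bayes classifier.
baseline = make_pipeline(CountVectorizer(), MultinomialNB())
baseline.fit(reviews, labels)
print(baseline.predict(["a beautifully acted but dull film"]))
```

A baseline like this gives a reference accuracy against which the RNN's sentiment predictions can be compared.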

About the author

A problem solver at heart, Janani has a master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds four patents for its real-time collaborative editing framework.

Section Introduction Transcripts

Course Overview
Hi, my name is Janani Ravi. Welcome to this course on sentiment analysis using recurrent neural networks in TensorFlow. A little about myself: I have a master's degree in electrical engineering from Stanford, and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. Recurrent neural networks are a versatile and powerful form of neural network, very useful for applications that need to consider context. RNNs are ideal for working with sequences of data: frames in a movie, sentences in a paragraph, or stock returns over a period. In order for RNNs to work on text sequences, we first build word embeddings, a numeric representation of words that can be fed into a neural network. Generating word embeddings is a compute-heavy operation, which can be optimized by using a special loss function, the noise-contrastive estimator. RNNs are especially well suited to natural language processing applications, and this course uses RNNs to build a complete sentiment classification system. We use a specific RNN architecture known as the LSTM, or long short-term memory. This architecture overcomes a known problem that RNNs suffer from: instability during optimization, the problem of vanishing and exploding gradients.
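To make the architecture concrete, here is a minimal sketch of an LSTM-based sentiment classifier written against today's tf.keras API. The layer sizes, sequence length, and vocabulary size are assumptions chosen for illustration; the 2017-era course builds its model with the TensorFlow APIs of its time, so this is only a sketch of the idea, not the course's own implementation.

```python
import tensorflow as tf

# Hypothetical sizes chosen for illustration; the course's hyperparameters may differ.
VOCAB_SIZE = 10000    # number of distinct words kept from the review corpus
EMBEDDING_DIM = 64    # dimensionality of each word embedding
MAX_LEN = 200         # reviews padded/truncated to this many word ids

# Embedding layer turns word ids into dense vectors; the LSTM reads the sequence
# and its final state feeds a sigmoid that predicts positive vs. negative sentiment.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Training then amounts to calling model.fit on padded sequences of word ids and their 0/1 sentiment labels.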

Implementing Word Embeddings in TensorFlow
Hi, and welcome to this module where we'll do some hands-on coding. We'll implement word embeddings using the Word2Vec model in TensorFlow. We'll see how Word2Vec embeddings can be generated using a simple neural network with just one hidden layer. There are two possible implementations of the Word2Vec model, the Continuous Bag of Words and the skip-gram implementation, and the two are more or less equivalent. We'll do some hands-on coding with the TensorFlow Python APIs to implement Word2Vec using the skip-gram model. We'll see that while generating word embeddings, the traditional softmax prediction layer does not work that well. Instead, we'll use the noise-contrastive estimator, a far better loss function than softmax for Word2Vec. We discussed briefly in the last module that there are two implementations of the Word2Vec model, the Continuous Bag of Words and the skip-gram. The Continuous Bag of Words model uses the words in the context surrounding the target word to predict the target. If the word London occurs in a number of sentences such as these on the left, we'll use the important words in those sentences to predict the target, London. The skip-gram model is the exact reverse of the Continuous Bag of Words: here we use the word London to predict the words in its context. The word London, fed into our machine learning model, will predict words like global, metropolis, Trafalgar Square, and so on.
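As a rough preview of what the skip-gram implementation looks like, here is a minimal sketch of one training step with tf.nn.nce_loss. It uses current TensorFlow 2 eager-mode idioms and made-up sizes, so treat it as an illustration of the technique rather than the course's own TensorFlow 1.x code.

```python
import tensorflow as tf

# Hypothetical sizes for illustration; the course's values may differ.
VOCAB_SIZE = 5000     # number of distinct words in the corpus
EMBEDDING_DIM = 128   # size of each word embedding
NUM_SAMPLED = 64      # negative samples drawn per (center, context) pair

# The embedding matrix is what we ultimately keep: one row per vocabulary word.
embeddings = tf.Variable(tf.random.uniform([VOCAB_SIZE, EMBEDDING_DIM], -1.0, 1.0))
# Output-layer weights and biases are only needed while training with NCE.
nce_weights = tf.Variable(tf.random.truncated_normal([VOCAB_SIZE, EMBEDDING_DIM], stddev=0.1))
nce_biases = tf.Variable(tf.zeros([VOCAB_SIZE]))

optimizer = tf.keras.optimizers.SGD(learning_rate=1.0)

def train_step(center_words, context_words):
    """center_words: [batch] word ids; context_words: [batch, 1] word ids."""
    with tf.GradientTape() as tape:
        # Look up the embedding of each center word (the skip-gram input).
        embedded = tf.nn.embedding_lookup(embeddings, center_words)
        # NCE contrasts the true context word against NUM_SAMPLED noise words,
        # avoiding a full softmax over the entire vocabulary.
        loss = tf.reduce_mean(
            tf.nn.nce_loss(
                weights=nce_weights,
                biases=nce_biases,
                labels=tf.cast(context_words, tf.int64),
                inputs=embedded,
                num_sampled=NUM_SAMPLED,
                num_classes=VOCAB_SIZE,
            )
        )
    variables = [embeddings, nce_weights, nce_biases]
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss
```

After many such steps over (center, context) pairs sampled from the corpus, the rows of the embeddings variable are the learned word vectors.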