
Natural Language Processing - Topic Identification

May 10, 2019 • 13 Minute Read

Introduction

Natural Language Processing (NLP) is the science of working with human language and text data. One of its applications is topic identification, a technique used to discover topics across text documents.

In this guide, we will cover the fundamentals of topic identification and modeling, and learn how to identify topics from text using the bag-of-words approach and simple NLP models.

We will start by importing the libraries we will be using in this guide.

Importing the Required Libraries and Modules

import nltk
import string
from collections import Counter

from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

nltk.download('punkt')        # tokenizer models; download if using for the first time
nltk.download('wordnet')      # lemmatizer data; download if using for the first time
nltk.download('stopwords')    # stopword lists; download if using for the first time

# For Gensim
import gensim
from gensim import corpora
from gensim.corpora.dictionary import Dictionary

Bag-of-words Approach

Bag-of-words is a simple method for identifying topics in a document. It works on the assumption that the higher the frequency of a term, the higher its importance. We will see how to implement this using the text example given below:

      text1 = "Avengers: Infinity War was a 2018 American superhero film based on the Marvel Comics superhero team the Avengers. It is the 19th film in the Marvel Cinematic Universe (MCU). The running time of the movie was 149 minutes and the box office collection was around 2 billion dollars. (Source: Wikipedia)"
print(text1)
    

Output:

Avengers: Infinity War was a 2018 American superhero film based on the Marvel Comics superhero team the Avengers. It is the 19th film in the Marvel Cinematic Universe (MCU). The running time of the movie was 149 minutes and the box office collection was around 2 billion dollars. (Source: Wikipedia)

The text is about the Avengers movie 'Infinity War'. To begin with, we will split the text into tokens through tokenization. The first line of code below creates the tokens, the second converts them to lowercase, and the third prints the output.

tokens = word_tokenize(text1)
lowercase_tokens = [t.lower() for t in tokens]
print(lowercase_tokens)

Output:

['avengers', ':', 'infinity', 'war', 'was', 'a', '2018', 'american', 'superhero', 'film', 'based', 'on', 'the', 'marvel', 'comics', 'superhero', 'team', 'the', 'avengers', '.', 'it', 'is', 'the', '19th', 'film', 'in', 'the', 'marvel', 'cinematic', 'universe', '(', 'mcu', ')', '.', 'the', 'running', 'time', 'of', 'the', 'movie', 'was', '149', 'minutes', 'and', 'the', 'box', 'office', 'collection', 'was', 'around', '2', 'billion', 'dollars', '.', '(', 'source', ':', 'wikipedia', ')']

The list of tokens generated above can be passed as an initialization argument to the 'Counter' class, which we imported at the beginning from the 'collections' module.

The first line of code below creates a counter object, 'bagofwords_1', that maps each token to its frequency. The second line prints the ten most common tokens along with their frequencies.

bagofwords_1 = Counter(lowercase_tokens)
print(bagofwords_1.most_common(10))

Output:

[('the', 7), ('was', 3), ('.', 3), ('avengers', 2), (':', 2), ('superhero', 2), ('film', 2), ('marvel', 2), ('(', 2), (')', 2)]

Text Preprocessing

The output generated above is interesting but not useful for identifying topics. This is because tokens like 'the' and 'was' are common words that do not help much in distinguishing topics. To overcome this, we will do some text preprocessing.

The first line of code below creates a list called 'alphabets' that loops over 'lowercase_tokens' and retains only purely alphabetic tokens. The second and third lines remove the English stopwords, and the fourth line prints the new list, 'stopwords_removed'.

alphabets = [t for t in lowercase_tokens if t.isalpha()]

words = stopwords.words("english")
stopwords_removed = [t for t in alphabets if t not in words]

print(stopwords_removed)

Output:

['avengers', 'infinity', 'war', 'american', 'superhero', 'film', 'based', 'marvel', 'comics', 'superhero', 'team', 'avengers', 'film', 'marvel', 'cinematic', 'universe', 'mcu', 'running', 'time', 'movie', 'minutes', 'box', 'office', 'collection', 'around', 'billion', 'dollars', 'source', 'wikipedia']

We have completed the initial text preprocessing steps, but more can still be done. One such important technique is word lemmatization, the process of reducing a word to its base or dictionary form, known as the lemma (for example, 'avengers' becomes 'avenger'). This is done in the code below.

The first line of code instantiates the WordNetLemmatizer. The second line uses the '.lemmatize()' method to create a new list called 'lem_tokens', the third line passes that list to the Counter class to create a new counter called 'bag_words', and the fourth line prints the six most common tokens.

lemmatizer = WordNetLemmatizer()

lem_tokens = [lemmatizer.lemmatize(t) for t in stopwords_removed]

bag_words = Counter(lem_tokens)
print(bag_words.most_common(6))

Output:

[('avenger', 2), ('superhero', 2), ('film', 2), ('marvel', 2), ('infinity', 1), ('war', 1)]

The above output is far more useful. Stopwords like 'the' and 'was' are gone, and by looking at the new set of common words, we can easily identify that the topic of our text is the Avengers.

We have seen how the bag-of-words approach can be used after preprocessing to identify topics in a corpus. We will now learn about another powerful NLP library, 'gensim', for topic modeling.

Using Gensim and Latent Dirichlet Allocation (LDA)

Gensim is an open-source NLP library that can be used for creating and querying a corpus. It works by building word embeddings, or vectors, which are then used for topic modeling.

Word vectors are multi-dimensional mathematical representations of words, learned from a corpus by neural methods. They give us insight into relationships between terms: for example, the vector offset between 'India' and 'New Delhi' is expected to be similar to the offset between 'China' and 'Beijing', since both pairs encode the country-capital relationship.
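As a quick aside, we can test this country-capital intuition with pretrained vectors. The snippet below is an illustrative sketch, separate from the topic-modeling workflow that follows: it assumes internet access and loads a small set of pretrained GloVe vectors ('glove-wiki-gigaword-50') through gensim's downloader API, then asks for the words closest to 'delhi' - 'india' + 'china'.

import gensim.downloader as api

# Load pretrained 50-dimensional GloVe vectors (~66 MB download on first use)
word_vectors = api.load("glove-wiki-gigaword-50")

# vector('delhi') - vector('india') + vector('china') should land near 'beijing'
# if the vectors capture the country-capital relationship
print(word_vectors.most_similar(positive=["delhi", "china"], negative=["india"], topn=3))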

To get started, we have created nine sample documents taken from the Pluralsight website. These are represented as sample1 to sample9 in the lines of code below. Finally, we have created a collection of these documents in the last line of code.

sample1 = "Our board of directors boasts 11 seasoned technology and business leaders from Adobe, GSK, HGGC and more."
sample2 = "Our executives lead by example and guide us to accomplish great things every day."
sample3 = "Working at Pluralsight means being surrounded by smart, passionate people who inspire us to do our best work."
sample4 = "A leadership team with vision."
sample5 = "Courses on cloud, microservices, machine learning, security, Agile and more."
sample6 = "Interactive courses and projects."
sample7 = "Personalized course recommendations from Iris."
sample8 = "We’re excited to announce that Pluralsight has ranked #9 on the Great Place to Work 2018, Best Medium Workplaces list!"
sample9 = "Few of the job opportunities include Implementation Consultant - Analytics, Manager - assessment production, Chief Information Officer, Director of Communications."

# compile documents
compileddoc = [sample1, sample2, sample3, sample4, sample5, sample6, sample7, sample8, sample9]

Let us examine the first document, which can be done with the code below.

print(compileddoc[0])

Output:

Our board of directors boasts 11 seasoned technology and business leaders from Adobe, GSK, HGGC and more.

In subsequent sections of this guide, we will try to perform topic modeling on the corpus 'compileddoc'. As always, the first step is text preprocessing.

The first three lines of code below set up the basic framework for cleaning the documents: a stopword set, a punctuation set, and a lemmatizer. The next lines define a function for cleaning a document. Finally, in the last line of code, we use the function to create the cleaned corpus, 'final_doc'.

# Use a name other than 'stopwords' so we don't shadow the imported module
stop_words = set(stopwords.words('english'))
exclude = set(string.punctuation)
lemma = WordNetLemmatizer()

def clean(document):
    stopwordremoval = " ".join([i for i in document.lower().split() if i not in stop_words])
    punctuationremoval = ''.join(ch for ch in stopwordremoval if ch not in exclude)
    normalized = " ".join(lemma.lemmatize(word) for word in punctuationremoval.split())
    return normalized

final_doc = [clean(document).split() for document in compileddoc]

Let us now look at the first document - pre and post text cleaning - with the following code.

      print("Before text-cleaning:", compileddoc[0]) 

print("After text-cleaning:",final_doc[0])
    

Output:

Before text-cleaning: Our board of directors boasts 11 seasoned technology and business leaders from Adobe, GSK, HGGC and more.
After text-cleaning: ['board', 'director', 'boast', '11', 'seasoned', 'technology', 'business', 'leader', 'adobe', 'gsk', 'hggc', 'more']

We are now ready to carry out topic modeling on the 'final_doc' corpus using a powerful statistical method called Latent Dirichlet Allocation (LDA). LDA is a generative probabilistic model: it treats each document as a mixture of topics and each topic as a distribution over words. It is not a classification technique and does not require labels; instead, it is an unsupervised method that infers topics from the patterns of co-occurring terms.

Preparing Document-Term Matrix for LDA

The first step is to convert the corpus into a matrix representation, as done in the following code.

The first line of code creates the term dictionary of the corpus, where every unique term is assigned an index. The second line converts the corpus into a document-term matrix using the dictionary prepared above. Finally, the third line of code stores a reference to gensim's LdaModel class, which we will call below to train the model.

dictionary = corpora.Dictionary(final_doc)

DT_matrix = [dictionary.doc2bow(doc) for doc in final_doc]

Lda_object = gensim.models.ldamodel.LdaModel
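Before training, it helps to see what this representation looks like. Each document is now a list of (token id, count) pairs; the optional snippet below prints the first document's vector and maps one id back to its token.

print(DT_matrix[0])                          # list of (token_id, count) pairs for the first document
token_id = DT_matrix[0][0][0]
print(token_id, '->', dictionary[token_id])  # map an id back to its token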

After creating the LDA model object, we will train it on the document-term matrix. The first line of code below performs this task by calling 'Lda_object' on 'DT_matrix'. We also need to specify the number of topics and the dictionary. Since we have a small corpus of nine documents, we can limit the number of topics to two or three.

In the lines of code below, we have set the number of topics as 2. The second line prints the result.

lda_model_1 = Lda_object(DT_matrix, num_topics=2, id2word=dictionary)

print(lda_model_1.print_topics(num_topics=2, num_words=5))

Output:

[(0, '...'), (1, '...')]   # each tuple pairs a topic id with its five top-weighted terms; exact terms and weights vary from run to run

In the output above, each tuple represents a topic, with its individual topic terms and term weights. The first topic seems to be more about the 'courses' offered by Pluralsight, while the second seems to relate to 'work'.
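We can also inspect which topic each individual document leans toward. Here is a minimal sketch using gensim's 'get_document_topics' method; like the topics themselves, the exact proportions will vary between runs.

for i, bow in enumerate(DT_matrix):
    # each entry is a list of (topic_id, proportion) pairs for that document
    print(i, lda_model_1.get_document_topics(bow))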

We can also change the number of topics and see how it changes the output. In the following code, we have selected three topics.

lda_model_2 = Lda_object(DT_matrix, num_topics=3, id2word=dictionary)

print(lda_model_2.print_topics(num_topics=3, num_words=5))

Output:

[(0, '...'), (1, '...'), (2, '...')]   # three topics this time; again, exact terms and weights vary between runs

The result is almost the same, with Topic 2 indicating 'courses' while Topics 1 and 3 again seem to relate to 'work'.
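Rather than eyeballing the topics, we can compare the two models with a topic coherence score. The sketch below uses gensim's CoherenceModel with the 'c_v' measure (an assumption; other measures such as 'u_mass' are also available); higher scores generally indicate more interpretable topics.

from gensim.models import CoherenceModel

for name, model in [("2 topics", lda_model_1), ("3 topics", lda_model_2)]:
    cm = CoherenceModel(model=model, texts=final_doc, dictionary=dictionary, coherence='c_v')
    print(name, cm.get_coherence())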

Conclusion

In this guide, you have learned about topic identification using the bag-of-words technique. You were also introduced to LDA through the powerful open-source NLP library 'gensim'.

The performance of topic models depends on the terms present in the corpus, represented as a document-term matrix. Since this matrix is sparse in nature, reducing its dimensionality may improve the model's performance. However, since our corpus was not very large, we can be reasonably confident in the results achieved.
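On a larger corpus, one simple way to reduce that dimensionality is to prune very rare and very common terms from the dictionary before rebuilding the document-term matrix. The thresholds below are illustrative rather than tuned values:

# Drop tokens appearing in fewer than 2 documents or in more than half
# of all documents, then rebuild the document-term matrix
dictionary.filter_extremes(no_below=2, no_above=0.5)
reduced_matrix = [dictionary.doc2bow(doc) for doc in final_doc]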

To learn more about Natural Language Processing, please refer to the following guides:

  1. [Natural Language Processing – Text Parsing](/guides/text-parsing)

  2. [Natural Language Processing - Machine Learning with Text Data](/guides/nlp-machine-learning-text-data)