LDA on Donald Trump's tweets using LeanXcale and Jupyter Notebook

In previous articles, we showed how to perform streaming ingestion of tweets into a LeanXcale database, how to perform sentiment analysis on tweets using Python and SQLAlchemy, and how to illustrate real-time geospatial data over a map using LeanXcale GIS capabilities. In this article, I demonstrate how easy it is to interact with LeanXcale leveraging the benefits of Jupyter Notebooks by applying topic modeling to Donald Trump's tweets.

This notebook and all required elements can be downloaded from our git repository at Gitlab.

Jupyter Notebooks

For those who are not familiar with the tool Jupyter Notebook (formerly IPython Notebook), let me introduce it first. According to the project's web page, Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text.

The interface allows a programmer to document code, run it, review the outcome, and visualize data within the same environment. This approach is very handy during the prototyping phase of a complete end-to-end data science workflow that includes data cleaning, statistical modeling, building and training machine learning models, and visualizing data.

Requirements

First, python3, pip, and Jupyter must be installed. To install python3 and pip, you can check our previous post on Twitter analysis, which explains the installation process. To install Jupyter, you just need to execute the following in a bash terminal:

pip3 install jupyter

To run the Jupyter Notebook inside a pipenv environment, you can follow this post. To reactivate the virtual environment once it has been deactivated, run the following in a bash terminal within the same folder where the pipenv was initially created:

pipenv shell

For the execution of the notebook, some dependencies must be installed in the Python environment where the notebook is running. For convenience, we recommend executing the notebook inside a virtual environment, for example, pipenv. With this approach, there won't be any conflicts between the installed packages and the host Python environment. Inside the pipenv, you can install the packages using:

pip3 install <package_name>

Also, a requirements.txt file is distributed within the git repository, so all dependencies except the SQLAlchemy LeanXcale python driver can be installed using:

pip3 install -r requirements.txt

In addition, stopwords and wordnet from NLTK must be downloaded:

python3
import nltk
nltk.download('stopwords')
nltk.download('wordnet')
quit()

To start the Jupyter Notebook server, run the following command in a bash terminal. A GUI will open in your browser, from which you can navigate to and open your notebook.

jupyter notebook

IMPORTANT! To follow along with this notebook, you will need the SQLAlchemy Python driver from LeanXcale. It can be downloaded from our website and installed following the instructions in our previous post on sentiment analysis.

pip3 install file.whl

Time to start coding!

As stated above, the goal of this article is to show how the machine learning capabilities of Jupyter Notebooks can join forces with a LeanXcale database and its seamless integration with pandas dataframes through SQLAlchemy. First, we explain two options for loading data into a LeanXcale database from pandas dataframes. Then, we use gensim to build the data structures required by NLP algorithms. Finally, an LDA topic modeling algorithm is applied to Donald Trump's tweets.

We begin by importing all required dependencies needed for our project.

In [2]:
import warnings

#warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
import sklearn
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, Float, DateTime, TIMESTAMP, BigInteger
from sqlalchemy.dialects.mysql import BIGINT
from sqlalchemy.orm import sessionmaker
import pandas as pd
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from wordcloud import WordCloud
import numpy as np
import matplotlib.pyplot as plt
from textblob import TextBlob
import tweepy
from datetime import datetime

import os
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly as py
import plotly.graph_objs as go
import gensim
from gensim import corpora, models, similarities
from gensim.models.coherencemodel import CoherenceModel
#import logging
import tempfile
from string import punctuation
from collections import OrderedDict
import seaborn as sns
import pyLDAvis.gensim
import matplotlib.pyplot as plt
%matplotlib inline
import configparser
init_notebook_mode(connected=True) #do not miss this line
warnings.filterwarnings("ignore")

Function definition

Now, we define several functions that will be helpful:

  1. insert2LX() uses the pandas dataframe to_sql() function to directly save a pandas dataframe to a SQL table. We set autocommit to false because we don't want to commit every insert, which would lead to poor performance. This way, we can insert the 40k tuples within a single transaction, and to_sql() performs the commit at the end. The function below implements this approach.
In [3]:
def insert2LX(tweet_df, user, database, table):
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database+'?autocommit=False')
    tweet_df.to_sql(name=table, con=engine, if_exists='append', index=False)
  2. insert2LX_2() is an alternative approach for inserting a pandas dataframe into a SQL table that manages the transaction ourselves instead of letting pandas do it for us. A typical use case is when the number of tuples in the dataframe is too large, in which case we could split the loading phase into several transactions and commit portions of the tuples (a chunked variant is sketched right after the code cell below). Also, if a conflict arises, it rolls back the transaction.
In [4]:
def insert2LX_2(tweet_df, user, database, table):
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database+'?autocommit=False')
    connection = engine.connect()
    transaction = connection.begin()
    try:

        tweet_df.to_sql(name=table, con=connection, if_exists='append', index=False,
                    dtype={'ID': BigInteger, 'USERNAME': String, 'CREATED': TIMESTAMP, 'TEXT': String, 'RT': BigInteger,
                           'LAT': Float, 'LON': Float, 'PLACE': String})
        transaction.commit()
    except:
        transaction.rollback()
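As mentioned above, with very large dataframes you may prefer to commit in portions rather than in a single transaction. Below is a minimal sketch of that idea, assuming the same connection string and table schema as insert2LX_2(); the chunk size of 10,000 rows and the function name are only illustrative.

def insert2LX_chunked(tweet_df, user, database, table, chunk_size=10000):
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database + '?autocommit=False')
    connection = engine.connect()
    # One transaction per chunk, so a failure only rolls back the current portion
    for start in range(0, len(tweet_df), chunk_size):
        chunk = tweet_df.iloc[start:start + chunk_size]
        transaction = connection.begin()
        try:
            chunk.to_sql(name=table, con=connection, if_exists='append', index=False)
            transaction.commit()
        except:
            transaction.rollback()
            raise
    connection.close()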
  3. getTweets() retrieves tweets from the timeline of a specific user through the Twitter API via the Tweepy library. Because a single request returns at most 200 tweets, this library is not used in this example. For completeness, it is included here in case you want to play around with it (a paginated variant is sketched after the code cell below).
In [5]:
def getTweets():
    config = configparser.ConfigParser()
    config.read('twitter4j.properties')
    consumer_key = config['twitter']['oauth.consumerKey']
    consumer_secret = config['twitter']['oauth.consumerSecret']
    access_token = config['twitter']['oauth.accessToken']
    access_token_secret = config['twitter']['oauth.accessTokenSecret']
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)
    tweets = api.user_timeline(screen_name='realDonaldTrump',count=2000)

    tweetList = [tweet._json for tweet in tweets]
    finalTweetList = []
    for tweet in tweetList:
        tweet_id = tweet['id_str']
        username = tweet['user']['name']
        created_at = int(datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S %z %Y').timestamp())*1000
        text = tweet['text']
        retweet_count = tweet['retweet_count']
        finalTweetList.append({'ID': int(tweet_id),
                                 'USERNAME' : str(username),
                                 'CREATED': created_at,
                                 'TEXT': str(text),
                                 'RT': int(retweet_count),
                                 'LAT' : float(0),
                                 'LON' : float(0),
                                 'PLACE' : str(0)
                                })
    tweet_df = pd.DataFrame(finalTweetList, columns = ['ID', 'USERNAME','CREATED','TEXT', 'RT', 'LAT', 'LON', 'PLACE'])
    return tweet_df
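If you do want to pull a longer portion of the timeline through the API, Tweepy's Cursor can paginate over user_timeline (the API itself only exposes roughly the most recent 3,200 tweets of a user). A minimal sketch, assuming the same authentication setup as in getTweets(); the function name is only illustrative:

def getTweetsPaginated(api, screen_name='realDonaldTrump', max_tweets=3200):
    # Cursor transparently follows the pagination of user_timeline,
    # fetching up to 200 tweets per request until max_tweets is reached
    tweets = []
    for tweet in tweepy.Cursor(api.user_timeline, screen_name=screen_name, count=200).items(max_tweets):
        tweets.append(tweet._json)
    return tweets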
  4. create_table() is an example of how to create a table with the required schema (the same as used in previous posts related to Twitter analysis) using the SQLAlchemy ORM functionality. Using the metadata object, we create a table object composed of column objects.
In [6]:
def create_table(user, database, table):
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database)
    tables = engine.table_names('APP')
    metadata = MetaData()
    metadata.bind = engine

    table = Table(table, metadata,
                  Column('ID', BigInteger, primary_key=True),
                  Column('USERNAME', String),
                  Column('CREATED', TIMESTAMP),
                  Column('TEXT', String),
                  Column('RT', Integer),
                  Column('LAT', Float),
                  Column('LON', Float),
                  Column('PLACE', String)
                 )
    metadata.create_all(engine)
  5. load_tweets() imports tweets from an ndjson (newline-delimited JSON) file that contains all of Donald Trump's tweets. This function reads the JSON and builds a dataframe with the same format as the SQL table previously created.
In [7]:
def load_tweets():
    import ndjson
    # load from file-like objects
    with open('realdonaldtrump.ndjson') as f:
        data = ndjson.load(f)
    finalTweetList = []
    for tweet in data:
        tweet_id = tweet['id_str']
        username = tweet['user']['name']
        created_at = int(datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S %z %Y').timestamp())*1000
        text = tweet['text']
        retweet_count = tweet['retweet_count']
        finalTweetList.append({'ID': int(tweet_id),
                                 'USERNAME' : str(username),
                                 'CREATED': created_at,
                                 'TEXT': str(text),
                                 'RT': int(retweet_count),
                                 'LAT' : float(0),
                                 'LON' : float(0),
                                 'PLACE' : str(0)
                                })
    tweet_df = pd.DataFrame(finalTweetList, columns = ['ID', 'USERNAME','CREATED','TEXT', 'RT', 'LAT', 'LON', 'PLACE'])
    return tweet_df

Hammer time!

Now that we have defined the logic to load and insert data into our LeanXcale database, we need to call the appropriate functions to start the loading process. We assume we already have a LeanXcale database ready to go. If you don't, you can request a free trial in the cloud by following the link.

In [8]:
tweets = load_tweets()
print('Reading tweets done!')
create_table('APP', 'twitterdb', 'TWEETS')
print('Table created!')
insert2LX(tweets, 'APP', 'twitterdb', 'TWEETS')
print('Tweets inserted!')
Reading tweets done!
got an empty frame, but the statement is not done yet
Table created!
Tweets inserted!
In [11]:
def lx_to_df(user, database, table):
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database)
    Session = sessionmaker(bind=engine)
    session = Session()
    meta = MetaData()
    meta.bind = engine
    tweetsTable = Table(table, meta, autoload=True)
    query_all = session.query(tweetsTable)
    df = pd.read_sql(query_all.statement, query_all.session.bind)
    print('df loaded from LX DB!')
    engine.dispose()
    return df
In [12]:
def lx_to_df_2(user, database, SQL):
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database)
    df = pd.read_sql_query(SQL, engine)
    engine.dispose()
    return df

The two functions above show two ways to read data back from LeanXcale into a pandas dataframe: lx_to_df() loads a whole table through an ORM session query, while lx_to_df_2() runs an arbitrary SQL query with pandas' read_sql_query(). We now select all data from our TWEETS table with the SQLAlchemy driver, as we did in our previous post on sentiment analysis.

In [13]:
tweets = lx_to_df('APP', 'twitterdb', 'TWEETS')
df loaded from LX DB!

We can quickly inspect our data by showing the total count of records loaded from the database along with the first five records.

In [14]:
print('Total count of tweets: '+str(len(tweets)))
tweets.head(5)
Total count of tweets: 40241
Out[14]:
ID USERNAME CREATED TEXT RT LAT LON PLACE
0 1698308935 Donald J. Trump 2009-05-04 18:54:25 Be sure to tune in and watch Donald Trump on L... 501 0.0 0.0 0
1 1701461182 Donald J. Trump 2009-05-05 01:00:10 Donald Trump will be appearing on The View tom... 33 0.0 0.0 0
2 1737479987 Donald J. Trump 2009-05-08 13:38:08 Donald Trump reads Top Ten Financial Tips on L... 13 0.0 0.0 0
3 1741160716 Donald J. Trump 2009-05-08 20:40:15 New Blog Post: Celebrity Apprentice Finale and... 12 0.0 0.0 0
4 1773561338 Donald J. Trump 2009-05-12 14:07:28 "My persona will never be that of a wallflower... 1422 0.0 0.0 0

We can get more insight into the data by plotting the tweet distribution over time with an interactive plotly graph. In the static view, this content may not be displayed.

In [15]:
import plotly
import plotly.graph_objs as go
plotly.offline.init_notebook_mode()

tweets['CREATED'] = pd.to_datetime(tweets['CREATED'], format='%Y-%m-%d %H:%M:%S')
tweetsT = tweets['CREATED']

trace = go.Histogram(
    x=tweetsT,
    marker=dict(
        color='blue'
    ),
    opacity=0.75
)

layout = go.Layout(
    title='Tweet Activity Over Years',
    height=450,
    width=1200,
    xaxis=dict(
        title='Month and year'
    ),
    yaxis=dict(
        title='Tweet Quantity'
    ),
    bargap=0.2,
)

data = [trace]

fig = go.Figure(data=data, layout=layout)
iplot(fig)

Let's get our hands dirty

Up until this point, we have loaded our dataset from a JSON file into our LeanXcale database and then into a pandas dataframe. After an initial inspection to check that the data is fine, we can then start processing the tweet text to perform topic modeling.

Corpus creation

First, we create a corpus represented as a list containing the text from all tweets. We can also print the first ten elements of the corpus.

In [16]:
corpus = []
for i in range(len(tweets['TEXT'])):
    corpus.append(tweets['TEXT'][i])

texts = corpus
corpus[0:10]
In [17]:
TEMP_FOLDER = tempfile.gettempdir()
print('Folder "{}" will be used to save temporary dictionary and corpus.'.format(TEMP_FOLDER))

#logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Folder "/tmp" will be used to save temporary dictionary and corpus.

Stopwords cleaning and dictionary creation

A typical pre-processing step is to remove the stopwords from the text corpus. Stopwords are very common words (such as "the", "a", or "in") that carry little meaning on their own and are typically ignored by search engines and text-mining pipelines. The NLTK library already includes a list of these words, so we use it to identify and remove them from our dataset. In addition, we add more words to this list to remove unwanted results that we expect to find in our dataset. The knowledge required to identify these extra words is known as domain knowledge; in our case, we filter words such as RT, rt, or the username @realDonaldTrump.

After completing this pre-processing, we create a dictionary with unique entries for each word in the corpus. To generate this dictionary, we convert all words to lowercase and remove the stopwords.

In [18]:
# removing common words and tokenizing
list1 = ['RT','rt', '@realDonaldTrump','@realdonaldtrump', '@realdonaldtrump:', '&amp:','&amp;']
stoplist = stopwords.words('english') + list(punctuation) + list1

texts = [[word for word in str(document).lower().split() if word not in stoplist] for document in corpus]

dictionary = corpora.Dictionary(texts)
dictionary.save(os.path.join(TEMP_FOLDER, 'trump.dict'))  # store the dictionary, for future reference
print(texts[5])
#print(dictionary)
#print(dictionary.token2id)
['miss', 'usa', 'tara', 'conner', 'fired', '"i\'ve', 'always', 'believer', 'second', 'chances."', 'says', 'donald', 'trump']

Bag of words (BoW) representation

Using our corpus and dictionary, we can represent the corpus as a bag of words with the doc2bow method from the gensim library. For each document, this returns a sparse vector of (word ID, count) tuples, where the ID is the word's unique identifier in the dictionary and the count is its number of occurrences in that document.

In [19]:
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'trump.mm'), corpus)
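To make the representation concrete, here is an optional check (not part of the original notebook) that prints the sparse vector of one tweet and maps the token IDs back to words through the dictionary:

# (token_id, count) pairs for the sixth tweet
print(corpus[5])
# Map the IDs back to the actual tokens
print([(dictionary[token_id], count) for token_id, count in corpus[5]])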

Term frequency - Inverse document frequency

The tf-idf model is used in text mining to determine how important a word is to a document within a corpus. In a nutshell, this model gives more weight to words that appear frequently in a document, but penalizes words that appear frequently across the whole corpus. Even though we compute it using the gensim library, the calculation works as follows (a small worked example follows the formulas).

  1. TF: Measures how frequently a word appears in a document, relative to the document's length.
$$TF(t) = \frac{\text{Number of times term } t \text{ appears in a document}}{\text{Total number of terms in the document}}$$

  2. IDF: Measures how important the word is within the context of the corpus.
$$IDF(t) = \log_e\left(\frac{\text{Total number of documents}}{\text{Number of documents containing term } t}\right)$$
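For a quick sense of the numbers (a hypothetical example, not taken from our dataset): if a term appears 3 times in a 20-word tweet, then TF = 3/20 = 0.15; if it appears in 100 of 40,000 documents, then IDF = log_e(40000/100) ≈ 6.0, so the tf-idf weight is roughly 0.15 × 6.0 ≈ 0.9. Keep in mind that gensim's TfidfModel applies its own default weighting and normalization, so the exact values it produces will differ.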
In [20]:
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]

Latent Dirichlet Allocation

The technique we apply for topic modeling is Latent Dirichlet Allocation (LDA), which is a widely used strategy based on a simple concept: each document can be described by a distribution of topics, and each topic can be described by a distribution of words.

NOTE: If you are interested in understanding the full behavior of LDA, then check out this post.

For those who don't want to go deep into the mathematical discussion, the gensim library comes to your aid with its implementation of the LDA model. First, we define the number of topics; later, we will discuss how to evaluate this choice. For now, it is sufficient to simply set a number of topics, for example, five.

In [21]:
total_topics = 5

After defining the number of topics, training an LDA model only requires the previously computed dictionary and the bag-of-words corpus. We can also experiment with other parameters to tune the algorithm, but for simplicity we use the default values (a sketch with a few of these knobs follows the next code cell). Finally, we print the five most important words of every topic as a preview.

In [22]:
lda = models.LdaModel(corpus, id2word=dictionary, iterations = 50, num_topics=total_topics)
corpus_lda = lda[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->lda
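For reference, these are some of the knobs you could experiment with when training the model. This is only a sketch with illustrative values, not the configuration used in this notebook:

# Illustrative (not tuned) hyperparameters for gensim's LdaModel
lda_tuned = models.LdaModel(
    corpus,
    id2word=dictionary,
    num_topics=total_topics,
    passes=10,          # number of full passes over the corpus
    iterations=100,     # maximum inference iterations per document
    alpha='auto',       # learn an asymmetric document-topic prior
    eta='auto',         # learn the topic-word prior
    chunksize=2000,     # documents per training chunk
    random_state=42     # make results reproducible
)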
In [23]:
lda.show_topics(total_topics,5)
Out[23]:
[(0,
  '0.012*"president" + 0.011*"democrats" + 0.008*"people" + 0.007*"united" + 0.006*"hillary"'),
 (1,
  '0.030*"great" + 0.009*"trump" + 0.009*"america" + 0.008*"make" + 0.007*"again!"'),
 (2,
  '0.031*"thank" + 0.017*"great" + 0.013*"trump" + 0.011*"new" + 0.011*"border"'),
 (3,
  '0.016*"great" + 0.013*"news" + 0.012*"fake" + 0.006*"#trump2016" + 0.006*"crooked"'),
 (4,
  '0.014*"media" + 0.009*"north" + 0.008*"you!" + 0.008*"southern" + 0.006*"happy"')]

Topic evaluation

One aspect of LDA is the static number of topics that must be defined prior to execution. Several measures for topic coherence have been proposed to distinguish between good and bad topics, and gensim offers an implementation to compute the coherence of an LDA model as presented in the paper http://svn.aksw.org/papers/2015/WSDM_Topic_Evaluation/public.pdf.

The following method computes the coherence values for different models with a varying number of topics. Depending on the number of executions, this process may take some time to finish, as it computes a different model for each iteration.

In [24]:
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3):

    coherence_values = []
    model_list = []
    for num_topics in range(start, limit, step):
        model = models.LdaModel(corpus, id2word=dictionary, num_topics=num_topics, iterations = 50)
        model_list.append(model)
        coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
        coherence_values.append(coherencemodel.get_coherence())

    return model_list, coherence_values
In [25]:
model_list, coherence_values = compute_coherence_values(dictionary=dictionary, corpus=corpus, texts=texts, start=1, limit=80, step=5)

We can plot the resulting coherence measurement as a function of the number of topics in an elbow plot to explore the nature of our dataset and determine an appropriate number of topics. This is the same technique used in clustering algorithms to determine the number of clusters. As the graph shows, the coherence metric keeps increasing with the number of topics, even for quite large values. This result could be an indicator that more pre-processing or fine-tuning is required, since our documents, i.e., the tweets, are short. One possible strategy is to enrich the tweets using word2vec techniques or by creating n-grams (a bigram sketch follows the plot cell below).

In [26]:
# Show graph
limit=80; start=1; step=5;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(("coherence_values"), loc='best')
plt.show()
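One way to try the n-gram idea mentioned above is gensim's Phrases model, which merges frequently co-occurring token pairs (for example, "white house" into "white_house") before building the dictionary. A minimal sketch, assuming the tokenized texts list created earlier; min_count and threshold are illustrative values:

from gensim.models.phrases import Phrases

# Detect frequent bigrams in the tokenized tweets
bigram_model = Phrases(texts, min_count=5, threshold=10)

# Re-tokenize each tweet, merging detected bigrams into single tokens
texts_bigrams = [bigram_model[doc] for doc in texts]

# Rebuild the dictionary and bag-of-words corpus from the enriched tokens
dictionary_bigrams = corpora.Dictionary(texts_bigrams)
corpus_bigrams = [dictionary_bigrams.doc2bow(doc) for doc in texts_bigrams]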

Visualization tool

Finally, we visualize the resulting LDA model to help understand how it has been fitted to a corpus. We can choose the model that offers the best coherence metric and use pyLDAvis to explore the result. In our case, we have not found a single number of topics that is clearly optimal in terms of coherence, so we use 11 topics for the visualization exercise.

NOTE: The pyLDAvis visualization dashboard may take some time to load due to the length of the dictionary. Don't panic. It will come up! Also, in the static view, this content may not be displayed.

In [27]:
'''model_opt = model_list[coherence_values.index(max(coherence_values))]
x_opt = x[coherence_values.index(max(coherence_values))]
model_opt.show_topics(x_opt,5)'''
x_list = list(x)
model_opt = model_list[x_list.index(11)]
model_opt.show_topics(11,5)
Out[27]:
[(0,
  '0.027*"hillary" + 0.022*"great" + 0.014*"join" + 0.014*"good" + 0.011*"news"'),
 (1,
  '0.022*"tonight" + 0.014*"fbi" + 0.012*"bill" + 0.011*"live" + 0.010*"p.m."'),
 (2,
  '0.031*"united" + 0.023*"states" + 0.019*"republican" + 0.017*"north" + 0.013*"great"'),
 (3,
  '0.017*"president" + 0.014*"get" + 0.012*"great" + 0.011*"country" + 0.010*"big"'),
 (4,
  '0.028*"fake" + 0.023*"media" + 0.017*"news" + 0.013*"never" + 0.012*"the…"'),
 (5,
  '0.093*"trump" + 0.024*"donald" + 0.021*"national" + 0.020*"southern" + 0.015*"via"'),
 (6,
  '0.036*"great" + 0.030*"new" + 0.015*"since" + 0.014*"honor" + 0.013*"big"'),
 (7,
  '0.086*"thank" + 0.030*"border" + 0.023*"house" + 0.019*"white" + 0.010*"a…"'),
 (8,
  '0.043*"great" + 0.033*"america" + 0.023*"again!" + 0.021*"wall" + 0.018*"make"'),
 (9,
  '0.034*"great" + 0.017*"you!" + 0.016*"#trump2016" + 0.015*"vote" + 0.015*"meeting"'),
 (10,
  '0.053*"democrats" + 0.025*"@foxnews" + 0.014*"senate" + 0.011*"incredible" + 0.009*"state"')]
In [28]:
pyLDAvis.enable_notebook()
panel = pyLDAvis.gensim.prepare(model_opt, corpus, dictionary, mds='tsne')
pyLDAvis.display(panel)
Out[28]:

Conclusion

If you have reached this point, congratulations! We learned how to load an ndjson file into our LeanXcale database and how to create the appropriate table using SQLAlchemy. We also reviewed how to read tweets with the Tweepy library, in case you want to work with a non-static dataset obtained directly from the Twitter API. Finally, we connected to our LeanXcale database to query the contents of the table, and applied text processing and LDA topic modeling to infer the topics discussed in Donald Trump's tweets.

If you have any concerns running the example or working with LeanXcale, then you can contact me using the information below.

WRITTEN BY

Jesus Manuel Gallego Romero, Software Engineer at LeanXcale

jesus.gallego@leanxcale.com

https://www.linkedin.com/in/jes%C3%BAs-manuel-gallego-romero-68a430134/
