In previous articles, we showed how to perform streaming ingestion of tweets into a LeanXcale database, how to perform sentiment analysis on tweets using Python and SQLAlchemy, and how to visualize real-time geospatial data on a map using LeanXcale's GIS capabilities. In this article, I demonstrate how easy it is to interact with LeanXcale from a Jupyter Notebook by applying topic modeling to Donald Trump's tweets.
This notebook and all required elements can be downloaded from our git repository at Gitlab.
For those who are not familiar with Jupyter Notebook (formerly IPython Notebook), let me introduce it first. According to the open-source project's web page, Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text.
The interface allows a programmer to document code, run it, review the outcome, and visualize data within the same environment. This approach is very handy during the prototyping phase of a complete end-to-end data science workflow that includes data cleaning, statistical modeling, building and training machine learning models, and visualizing data.
First, python3, pip, and Jupyter must be installed. To install python3 and pip, you can check our previous post on Twitter analysis, which explains the installation process. To install Jupyter, you just need to execute the following in a bash terminal:
pip3 install jupyter
To run the Jupyter Notebook inside a pipenv environment, you can follow this post. To reactivate the virtual environment once it has been deactivated, run the following in a bash terminal from the same folder where the pipenv was initially created:
pipenv shell
For the execution of the notebook, some dependencies must be installed in the Python environment where the notebook is running. For convenience, we recommend executing the notebook inside a virtual environment, for example, pipenv. With this approach, there won't be any conflict between the packages installed and the host Python environment. Inside the pipenv, you can install the packages using:
pip3 install <package_name>
Also, a requirements.txt file is distributed within the git repository, so all dependencies except the SQLAlchemy LeanXcale python driver can be installed using:
pip3 install -r requirements.txt
In addition, stopwords and wordnet from NLTK must be downloaded:
python3
import nltk
nltk.download('stopwords')
nltk.download('wordnet')
quit()
To start the Jupyter Notebook server, run the following command in a bash terminal; a GUI will open in your browser, from which you can navigate to and open your notebook.
jupyter notebook
IMPORTANT! To follow along with this notebook, you will need the SQLAlchemy Python driver from LeanXcale. It can be downloaded from our website and installed following the instructions in our previous post on sentiment analysis.
pip3 install file.whl
As stated above, the goal of this article is to show how the capabilities of Jupyter Notebooks for implementing machine learning tasks can be combined with a LeanXcale database and its seamless integration with pandas dataframes through SQLAlchemy. First, we explain two options for loading a LeanXcale database from pandas dataframes. Then, we use gensim to build the data structures required by NLP algorithms. Finally, an LDA topic modeling algorithm is applied to Donald Trump's tweets.
We begin by importing all required dependencies needed for our project.
import warnings
#warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
import sklearn
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, Float, DateTime, TIMESTAMP, BigInteger
from sqlalchemy.dialects.mysql import BIGINT
from sqlalchemy.orm import sessionmaker
import pandas as pd
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from wordcloud import WordCloud
import numpy as np
import matplotlib.pyplot as plt
from textblob import TextBlob
import tweepy
from datetime import datetime
import os
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly as py
import plotly.graph_objs as go
import gensim
from gensim import corpora, models, similarities
from gensim.models.coherencemodel import CoherenceModel
#import logging
import tempfile
from string import punctuation
from collections import OrderedDict
import seaborn as sns
import pyLDAvis.gensim
import matplotlib.pyplot as plt
%matplotlib inline
import configparser
init_notebook_mode(connected=True) #do not miss this line
warnings.filterwarnings("ignore")
Now, we define several functions that will be helpful:
def insert2LX(tweet_df, user, database, table):
    # Append the dataframe to the given LeanXcale table through the SQLAlchemy engine
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database + '?autocommit=False')
    tweet_df.to_sql(name=table, con=engine, if_exists='append', index=False)
def insert2LX_2(tweet_df, user, database, table):
    # Same as insert2LX, but with explicit transaction handling and column types
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database + '?autocommit=False')
    connection = engine.connect()
    transaction = connection.begin()
    try:
        tweet_df.to_sql(name=table, con=connection, if_exists='append', index=False,
                        dtype={'ID': BigInteger, 'USERNAME': String, 'CREATED': TIMESTAMP, 'TEXT': String,
                               'RT': BigInteger, 'LAT': Float, 'LON': Float, 'PLACE': String})
        transaction.commit()
    except:
        transaction.rollback()
def getTweets():
    # Fetch the latest tweets from @realDonaldTrump through the Twitter API and build a dataframe
    config = configparser.ConfigParser()
    config.read('twitter4j.properties')
    consumer_key = config['twitter']['oauth.consumerKey']
    consumer_secret = config['twitter']['oauth.consumerSecret']
    access_token = config['twitter']['oauth.accessToken']
    access_token_secret = config['twitter']['oauth.accessTokenSecret']
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)
    tweets = api.user_timeline(screen_name='realDonaldTrump', count=2000)
    tweetList = [tweet._json for tweet in tweets]
    finalTweetList = []
    for tweet in tweetList:
        tweet_id = tweet['id_str']
        username = tweet['user']['name']
        created_at = int(datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S %z %Y').timestamp()) * 1000
        text = tweet['text']
        retweet_count = tweet['retweet_count']
        finalTweetList.append({'ID': int(tweet_id),
                               'USERNAME': str(username),
                               'CREATED': created_at,
                               'TEXT': str(text),
                               'RT': int(retweet_count),
                               'LAT': float(0),
                               'LON': float(0),
                               'PLACE': str(0)
                               })
    tweet_df = pd.DataFrame(finalTweetList, columns=['ID', 'USERNAME', 'CREATED', 'TEXT', 'RT', 'LAT', 'LON', 'PLACE'])
    return tweet_df
def create_table(user, database, table):
    # Create the tweets table schema in the LeanXcale database
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database)
    tables = engine.table_names('APP')
    metadata = MetaData()
    metadata.bind = engine
    table = Table(table, metadata,
                  Column('ID', BigInteger, primary_key=True),
                  Column('USERNAME', String),
                  Column('CREATED', TIMESTAMP),
                  Column('TEXT', String),
                  Column('RT', Integer),
                  Column('LAT', Float),
                  Column('LON', Float),
                  Column('PLACE', String)
                  )
    metadata.create_all(engine)
def load_tweets():
    # Read the tweets from a local ndjson file and build a dataframe matching the table schema
    import ndjson
    # load from file-like objects
    with open('realdonaldtrump.ndjson') as f:
        data = ndjson.load(f)
    finalTweetList = []
    for tweet in data:
        tweet_id = tweet['id_str']
        username = tweet['user']['name']
        created_at = int(datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S %z %Y').timestamp()) * 1000
        text = tweet['text']
        retweet_count = tweet['retweet_count']
        finalTweetList.append({'ID': int(tweet_id),
                               'USERNAME': str(username),
                               'CREATED': created_at,
                               'TEXT': str(text),
                               'RT': int(retweet_count),
                               'LAT': float(0),
                               'LON': float(0),
                               'PLACE': str(0)
                               })
    tweet_df = pd.DataFrame(finalTweetList, columns=['ID', 'USERNAME', 'CREATED', 'TEXT', 'RT', 'LAT', 'LON', 'PLACE'])
    return tweet_df
Now that we have defined the logic to load and insert data into our LeanXcale database, we next call the appropriate functions to start the loading process. We assume a LeanXcale database is already up and running. If you don't have your LeanXcale database ready, you can request a free trial in the cloud by following the link.
tweets = load_tweets()
print('Reading tweets done!')
create_table('APP', 'twitterdb', 'TWEETS')
print('Table created!')
insert2LX(tweets, 'APP', 'twitterdb', 'TWEETS')
print('Tweets inserted!')
def lx_to_df(user, database, table):
    # Load the full contents of a LeanXcale table into a pandas dataframe
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database)
    Session = sessionmaker(bind=engine)
    session = Session()
    meta = MetaData()
    meta.bind = engine
    tweetsTable = Table(table, meta, autoload=True)
    query_all = session.query(tweetsTable)
    df = pd.read_sql(query_all.statement, query_all.session.bind)
    print('df loaded from LX DB!')
    engine.dispose()
    return df
def lx_to_df_2(user, database, SQL):
    # Run an arbitrary SQL query against the LeanXcale database and return the result as a dataframe
    engine = create_engine('leanxcale://' + user + '@10.64.21.1:1529/' + database)
    df = pd.read_sql_query(SQL, engine)
    engine.dispose()
    return df
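The second helper accepts an arbitrary SQL statement, which is convenient when only a subset of the table is needed. For example, a hypothetical query that loads only heavily retweeted tweets could look like this (the table name and threshold are just for illustration):
# Load only tweets with more than 10,000 retweets into a dataframe
popular_tweets = lx_to_df_2('APP', 'twitterdb', 'SELECT * FROM TWEETS WHERE RT > 10000')
print(len(popular_tweets))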
We select all data from our table TWEETS with the SQLAlchemy driver, as we did in our previous post on sentiment analysis.
tweets = lx_to_df('APP', 'twitterdb', 'TWEETS')
We can quickly inspect our data by showing the total count of records loaded from the database along with the first five records.
print('Total count of tweets: '+str(len(tweets)))
tweets.head(5)
We can get more insight into the data by plotting the tweet distribution over time with an interactive plotly graph. In the static view, this content may not be displayed.
import plotly
import plotly.graph_objs as go
plotly.offline.init_notebook_mode()
tweets['CREATED'] = pd.to_datetime(tweets['CREATED'], format='%y-%m-%d %H:%M:%S')
tweetsT = tweets['CREATED']
trace = go.Histogram(
    x=tweetsT,
    marker=dict(
        color='blue'
    ),
    opacity=0.75
)
layout = go.Layout(
    title='Tweet Activity Over Years',
    height=450,
    width=1200,
    xaxis=dict(
        title='Month and year'
    ),
    yaxis=dict(
        title='Tweet Quantity'
    ),
    bargap=0.2,
)
data = [trace]
fig = go.Figure(data=data, layout=layout)
iplot(fig)
Up until this point, we have loaded our dataset from a JSON file into our LeanXcale database and then into a pandas dataframe. After an initial inspection to check that the data is fine, we can then start processing the tweet text to perform topic modeling.
First, we create a corpus represented as a list containing the text from all tweets. We can also print the first ten elements of the corpus.
corpus = []
for i in range(len(tweets['TEXT'])):
    corpus.append(tweets['TEXT'][i])
corpus[0:10]
texts = corpus
TEMP_FOLDER = tempfile.gettempdir()
print('Folder "{}" will be used to save temporary dictionary and corpus.'.format(TEMP_FOLDER))
#logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
A typical pre-processing step is to remove the stopwords from the text corpus. Stopwords are commonly used words that search engines are programmed to ignore. The NLTK library already includes a list of these words, so we will use this to identify and remove them from our dataset. In addition, we add more words to this list to remove unwanted results that we expect are included in our dataset. The knowledge required for identifying these extra words is known as domain knowledge, and in our case, we filter words such as RT, rt, or the username @realDonaldTrump.
After completing this pre-processing, we create a dictionary with unique entries for each word in the corpus. To generate this dictionary, we convert all words to lowercase and remove the stopwords.
# removing common words and tokenizing
list1 = ['RT','rt', '@realDonaldTrump','@realdonaldtrump', '@realdonaldtrump:', '&:','&']
stoplist = stopwords.words('english') + list(punctuation) + list1
texts = [[word for word in str(document).lower().split() if word not in stoplist] for document in corpus]
dictionary = corpora.Dictionary(texts)
dictionary.save(os.path.join(TEMP_FOLDER, 'trump.dict')) # store the dictionary, for future reference
print(texts[5])
#print(dictionary)
#print(dictionary.token2id)
Using our created corpus and dictionary, we can represent the corpus as a bag of words with the doc2bow method from the gensim library. For each document, this returns a sparse vector of tuples containing each word's unique ID (from the dictionary) and its number of occurrences in that document.
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'trump.mm'), corpus)
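As a quick sanity check, we can also convert a toy document by hand. The words below are only assumed to appear in the tweets; the actual IDs depend on the dictionary built above, and words that are not in the dictionary are simply ignored by doc2bow.
# Toy example: returns (word_id, count) tuples for the words known to the dictionary
example_doc = ['great', 'great', 'america', 'wordnotinthedictionary']
print(dictionary.doc2bow(example_doc))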
The tf-idf model is used in text mining to determine the importance of a word to a document in a corpus. In a nutshell, this model assigns more importance to words that appear frequently in a document, but applies a penalty if a word also appears frequently across the whole corpus. We compute it with the gensim library, but the underlying calculation is essentially the following.
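For reference, a common formulation (which, up to the base of the logarithm and the final normalization of the document vectors, is what gensim's TfidfModel computes by default) weights a term t in a document d as:
tfidf(t, d) = tf(t, d) * log( N / df(t) )
where tf(t, d) is the number of occurrences of t in d, N is the total number of documents in the corpus, and df(t) is the number of documents that contain t.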
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
The technique we apply for topic modeling is Latent Dirichlet Allocation (LDA), which is a widely used strategy based on a simple concept: each document can be described by a distribution of topics, and each topic can be described by a distribution of words.
NOTE: If you are interested in understanding the full behavior of LDA, then check out this post.
For those who don't want to go deep into the mathematical details, the gensim library comes to your aid with its implementation of the LDA model. First, we need to define the number of topics; we will discuss how to evaluate this choice later. For now, it is sufficient to simply set a number of topics, for example, five.
total_topics = 5
After defining the number of topics, training an LDA model only requires the previously computed dictionary and bag-of-words corpus. We could also experiment with other parameters to tune the algorithm, but for simplicity we will use the default values. As a preview, we also print the five most important words of every topic.
lda = models.LdaModel(corpus, id2word=dictionary, iterations = 50, num_topics=total_topics)
corpus_lda = lda[corpus_tfidf]  # create a double wrapper over the original corpus: bow -> tfidf -> fold-in-lda
lda.show_topics(total_topics,5)
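To get a feel for how the trained model behaves on individual documents, we can also inspect the topic distribution of a single tweet. The snippet below is only illustrative; the exact topic IDs and proportions will differ between runs.
# Topic distribution of the first tweet in the bag-of-words corpus, as (topic_id, probability) pairs
print(lda.get_document_topics(corpus[0]))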
One limitation of LDA is that the number of topics is static and must be defined prior to execution. Several measures of topic coherence have been proposed to distinguish between good and bad topics, and gensim offers an implementation to compute the coherence of an LDA model as presented in the paper http://svn.aksw.org/papers/2015/WSDM_Topic_Evaluation/public.pdf.
The following method computes the coherence values for different models with a varying number of topics. Depending on the number of executions, this process may take some time to finish, as it computes a different model for each iteration.
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3):
    # Train one LDA model per topic count and record its c_v coherence score
    coherence_values = []
    model_list = []
    for num_topics in range(start, limit, step):
        model = models.LdaModel(corpus, id2word=dictionary, num_topics=num_topics, iterations=50)
        model_list.append(model)
        coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
        coherence_values.append(coherencemodel.get_coherence())
    return model_list, coherence_values
model_list, coherence_values = compute_coherence_values(dictionary=dictionary, corpus=corpus, texts=texts, start=1, limit=80, step=5)
We can plot the resulting coherence measurement as a function of the number of topics using an elbow plot to explore the nature of our dataset and determine an appropriate number of topics. This is the same technique used in clustering algorithms for determining the number of clusters. As we see in the graph, the coherence metric keeps increasing even for quite large numbers of topics. This could be an indicator that more pre-processing and fine-tuning are required, since our documents, i.e., the tweets, are very short. One possible strategy is to enrich the tweets using word2vec techniques or by creating n-grams, as sketched after the plot below.
# Show graph
limit=80; start=1; step=5;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(("coherence_values"), loc='best')
plt.show()
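As mentioned above, one way to give such short documents more context is to merge frequent word pairs into single bigram tokens before building the dictionary and the bag-of-words corpus. A minimal sketch using gensim's Phrases model follows; the min_count and threshold values are only assumptions to start experimenting with, not tuned parameters.
from gensim.models import Phrases
from gensim.models.phrases import Phraser
# Detect frequent word pairs in the tokenized tweets and merge them into bigram tokens
bigram = Phraser(Phrases(texts, min_count=5, threshold=10))
texts_bigrams = [bigram[doc] for doc in texts]
# The dictionary and corpus would then be rebuilt from texts_bigrams, e.g.:
# dictionary_bi = corpora.Dictionary(texts_bigrams)
# corpus_bi = [dictionary_bi.doc2bow(doc) for doc in texts_bigrams]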
Finally, we visualize the resulting LDA model to help understand how it has been fitted to a corpus. We can choose the model that offers the best coherence metric and use pyLDAvis to explore the result. In our case, we have not found a single number of topics that is clearly optimal in terms of coherence, so we use 11 topics for the visualization exercise.
NOTE: The pyLDAvis visualization dashboard may take some time to load due to the length of the dictionary. Don't panic. It will come up! Also, in the static view, this content may not be displayed.
'''model_opt = model_list[coherence_values.index(max(coherence_values))]
x_opt = x[coherence_values.index(max(coherence_values))]
model_opt.show_topics(x_opt,5)'''
x_list = list(x)
model_opt = model_list[x_list.index(11)]
model_opt.show_topics(11,5)
pyLDAvis.enable_notebook()
panel = pyLDAvis.gensim.prepare(model_opt, corpus_lda, dictionary, mds='tsne')
pyLDAvis.display(panel)
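If you want to keep the interactive dashboard outside the notebook, for instance to share it, one option is to export it as a standalone HTML file; the filename here is just an example.
# Save the interactive LDA visualization to a standalone HTML file
pyLDAvis.save_html(panel, 'trump_lda_topics.html')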
If you have reached this point, congratulations! We learned how to load an ndjson file into our LeanXcale database and how to create the appropriate table using SQLAlchemy. We also reviewed how to read tweets with the tweepy API, so the same pipeline can be fed with a non-static dataset obtained directly from Twitter. Finally, we connected to our LeanXcale database to query the contents of the table we created and applied text processing and LDA topic modeling to infer topics from Donald Trump's tweet collection.
If you have any issues running the example or working with LeanXcale, you can contact me using the information below.
WRITTEN BY
Jesus Manuel Gallego Romero, Software Engineer at LeanXcale
jesus.gallego@leanxcale.com
https://www.linkedin.com/in/jes%C3%BAs-manuel-gallego-romero-68a430134/