Data Warehousing and Data Science

4 July 2021

What is Convolutional Neural Network (CNN)?

Filed under: Machine Learning — Vincent Rainardi @ 7:52 am

In the previous article I explained what convolution was (link). We use convolution in image classification/recognition, to power a special type of neural network called CNN (Convolutional Neural Network). In this article I’ll explain what CNN is and how we use it for image classification.

What is CNN?

CNN is a neural network consisting of convolutional layers and pooling layers like this:

In the above architecture, the CNN classifies images into 10 different classes.

  • The dimension of the images is 200 x 200 pixel, in colour i.e. 3 layers (RGB), and the output is 10 classes.
  • The convolutional layers extract features such as edges (see my last article here). We set the dimension of the first convolutional layer to be the same as the image, i.e. 200 x 200. The 32 is the number of filters (features) we are extracting; usually we start with 32 or 64 and double it on the next group. When an image passes through a convolutional layer we try not to change the dimension, which we do by using "padding".
  • The pooling layers summarise the features, either by taking the average or the maximum. When it passes a pooling layer, the dimension is typically reduced by half.
  • The flatten layer changes the shape into 1 dimension so we can do normal neural network operations (configuring the weights). In the above example the last pooling layer is 25 x 25 x 128. When this is flattened, the 1-dimensional shape is 25 x 25 x 128 = 1 x 80,000.
  • The fully connected layers (also known as dense layers) are fully connected layers of neurons (a multilayer perceptron, MLP). We tend to set the number of neurons in the first fully connected layer to 4x or 8x the number of filters in the last pooling layer. The second layer can be reduced to half of the first layer, e.g. in the above example the first layer is 1024 (8x of 128) and the second layer is 512 (half of 1024).

Python Code

So building a CNN is about composing many layers of a neural network one by one. The tool that most people use for CNN nowadays is Keras (part of TensorFlow). A CNN requires a lot of computing resources, i.e. GPU, disk and memory, which is why we usually use Google Colab or Kaggle. Both of them provide a GPU environment, which can speed up our code 10x (link) or even 100x. A CNN epoch which took 15-20 minutes on my CPU laptop took only 3-4 seconds in Colab (an epoch is one full pass over the training data).

As for disk space, a CNN can take a lot of it. For a CNN we need to do image augmentation, meaning we randomly rotate the source images so that the model does not overfit. Not only rotating, but also flipping, zooming, shifting, changing the brightness, etc., all of which can be done very simply in Keras (link). The rotated/shifted/flipped images can take a lot of disk space, and for that we can use Google Drive. Luckily in Keras we can generate the augmented images on the fly when we fit the model, without having to generate them and store them on disk! (link)

First we use image_dataset_from_directory to load the images from a directory into a TF dataset:

import tensorflow as tf
training_dataset = tf.keras.preprocessing.image_dataset_from_directory(data_directory, validation_split=0.25, subset="training", seed=100, image_size=(200,200), batch_size=30)

Note: the output classes are automatically generated from the subdirectory names and they are stored in the TF dataset as "class_names".
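
As a quick check (a minimal sketch, assuming the directory layout above), we can print the detected class names and how many there are:

class_names = training_dataset.class_names   # one class per subdirectory
print(class_names)        # the class labels detected from the folder names
print(len(class_names))   # should match the number of output classes (10 here)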

Then we build the CNN model like this:

# Data augmentation: random flip, rotation, zoom and contrast; Rescaling divides by 255 to standardise the input
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPooling2D, Dropout, Flatten, Dense

augment = keras.Sequential(
    [layers.experimental.preprocessing.RandomFlip(mode="horizontal_and_vertical", input_shape=(200,200,3)),
     layers.experimental.preprocessing.RandomRotation((0.1, 1.3), fill_mode='reflect'),
     layers.experimental.preprocessing.RandomZoom(height_factor=(-0.15, 0.15), fill_mode='reflect'),
     layers.experimental.preprocessing.RandomContrast(0.1)])

model = Sequential([augment, layers.experimental.preprocessing.Rescaling(1./255)])

# Two 32 convolution layers with batch normalisation, then max pooling with dropout
model.add(Conv2D(32, (3,3), padding="same", activation="relu"))
model.add(BatchNormalization())
model.add(Conv2D(32, (3,3), padding="same", activation="relu"))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))

# Two 64 convolution layers with BN, then max pooling with dropout
model.add(Conv2D(64, (5,5), padding="same", activation="relu"))
model.add(BatchNormalization())
model.add(Conv2D(64, (5,5), padding="same", activation="relu"))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))

# Flatten then 3 fully connected layers with dropout
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(64, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(10, activation="softmax"))

For a complete notebook on the CIFAR-10 dataset (images with 10 classes) please refer to Jason Brownlee's article (link) and Abhijeet Kumar's article (link). The arguments for RandomFlip, RandomRotation, RandomZoom, etc. are in the Keras documentation here: link.

As we can see above, there are a few things that we need to set when building a CNN (a minimal sketch of a lighter configuration follows the list below):

  1. Number of filters for Conv2D: for simple images such as MNIST start with 8 or 16, doubling on the next group. For complex images such as CIFAR start with 32 or 64, doubling on the next group. Sometimes doubling doesn't increase the accuracy; in that case keep it the same on the next group, or even decrease it. For instance: 32, 32, 16, 16, 128, 64, 10 rather than 32, 32, 64, 64, 128, 64, 10.
  2. Number of Conv2D layers in each group: in the above example I use 2 layers. Sometimes it is enough to use 1 layer, as using 2 layers doesn't increase the accuracy. For instance: 32, 64, 128, 64, 10 rather than 32, 32, 64, 64, 128, 64, 10. Or two layers on the first group but one layer on the second group, like this: 32, 32, 64, 128, 64, 10.
  3. Number of nodes in the dense layers: in the above example I use 128 and 64 (the last layer is dictated by the output classes, for example the CIFAR-10 dataset has 10 classes so the last layer is 10). For simple images like MNIST, or even complex images like CIFAR-10, sometimes we don't need >100 nodes in the first layer; 16 or even 8 is enough. So we can use 8, 8, 10 instead of 128, 64, 10. Try 8 first and see if the accuracy is better, then increase it to 16, 32, etc. Of course it doesn't have to be a power of 2. It can be 10, 25, 50, etc.
  4. Number of dense layers: in the above example I used two layers, i.e. 128 and 64. Depending on the data, sometimes we need more than 2 layers, or we may only need 1 layer. So try 1 and 3 layers and see if the accuracy increases. For classification we rarely need more than 3 layers (with the right number of neurons).
  5. Pooling layer: in the above example I use max pooling with a 2 x 2 pool size and a stride of 2. This is commonly used (in VGGNet for example), effectively reducing the data shape by half in each dimension. But we should try (3,3) and (4,4) as well; if the accuracy does not drop, then the bigger pool is better, because training would be faster, the model would be simpler and there would be a smaller chance of the model overfitting the data.
  6. With all the above we need to be careful about overfitting. For example, the accuracy on the training data can be 90% while the accuracy on the validation data is only 50%. To avoid this we use dropout, i.e. randomly dropping some connections between layers during training. In the above example I use a dropout of 20%. We should try 30% and 50% as well. If the validation accuracy does not drop, then the higher the dropout the better, because training would be faster, the model would be simpler and there would be a smaller chance of the model overfitting the data. We add the dropout after the pooling layer, not after the convolution layer.
  7. And finally, after every convolution layer we add batch normalisation (BN) to make back propagation converge faster (this applies to all deep neural networks; the problem it mitigates is called internal covariate shift, link). Sometimes if we use BN we don't need dropout (using both can make it worse: link, or better: link), so check the accuracy with the dropout removed. If the accuracy stays the same, remove the dropout.
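
To illustrate points 1-3, below is a minimal sketch of a lighter configuration (one convolution layer per group and a small dense layer), reusing the imports from the code above. The exact filter and node counts are just illustrative and should be tuned against your own validation accuracy:

# Lighter variant: 32, 64, 128 filters with one Conv2D per group, then a small dense layer
light_model = Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(200, 200, 3)),
    Conv2D(32, (3,3), padding="same", activation="relu"),
    BatchNormalization(),
    MaxPooling2D(pool_size=(2,2)),
    Conv2D(64, (3,3), padding="same", activation="relu"),
    BatchNormalization(),
    MaxPooling2D(pool_size=(2,2)),
    Conv2D(128, (3,3), padding="same", activation="relu"),
    BatchNormalization(),
    MaxPooling2D(pool_size=(2,2)),
    Flatten(),
    Dense(8, activation="relu"),    # try 8 first, then 16, 32, ...
    Dense(10, activation="softmax")
])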

Below are the results of trying different numbers of filters and layers (assuming the filter size is constant at 3×3), both in the conv layers and the dense layers, as well as trying different batch normalisation and dropout settings.

Legend:

  • C: means convolutional layer. The numbers represent the filters in that layer, for example “C: 16 16 32 32” means four convolutional layers, 16 filters on the first and second layers, 32 filters on the third and fourth layers.
  • D: means dense layer (fully connected layer), which is after the flatten layer. The numbers represent the nodes in the layer. For example “D: 16” means after the flatten layer there is one dense layer with 16 nodes (neurons).
  • Lower case n following a convolutional layer means batch normalisation. For example: “C: 16 16n 32 32n” means there are 4 convolutional layers, and after the second and fourth layers there is a batch normalisation layer.
  • Lower case d means dropout layer. The number after d is the dropout rate, i.e. ½ means 50%, ¼ means 25%. For example: "C: 16n 16n d¼ 32n 32n d¼" means four convolutional layers all with batch normalisation, with 25% dropout after the second and fourth layers.
  • Upper case R on the dense layer means ReLU activation layer.
  • L2 on the dense layer means the kernel regularizer is using L2 regularization penalty, with L2 factor kept default at 0.01.
  • The yellow numbers in circle are the model numbers.

In the above case we should choose model #26 because the training accuracy reached 90% at epoch 10 (93% at epoch 20), and the validation accuracy reached 51% at epoch 13 (50% at epoch 20). The goal here is for both the training accuracy and validation accuracy to be as high as possible, using the minimum number of filters and layers.

We also look for stability. For example, model #27 had a big validation drop at epoch 18; we want to avoid things like that. We want the validation accuracy to be stable, because if it fluctuates a lot it could unexpectedly drop when we run it for 50 epochs.

In terms of resources, what we are looking for is actually not the number of filters and layers, but the number of parameters. The "model summary" for model #26 is shown below. It displays the number of parameters for each layer.

As we can see above, the dense 16 layer (second line from bottom) has 946,704 parameters. This is because all 16 nodes in the dense layer are connected to the previous layer, which is the flatten layer with 59,168 nodes. So the number of weights = 59,168 x 16 = 946,688. Plus 16 biases, one for each node in the dense layer, = 946,704.

Whereas for model #28 the "model summary" looks like this:

We can see that the first dense 8 layer (third line from bottom) has 473,352 parameters. This is because it has 8 nodes and those nodes are connected to the previous layer, which is the flatten layer with 59,168 nodes. So the number of weights = 59,168 x 8 = 473,344. Plus 8 biases, one for each node, = 473,352.
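
As a minimal sketch, this is how those dense-layer parameter counts are computed (weights = inputs x nodes, plus one bias per node):

def dense_params(n_inputs, n_nodes):
    # each node has one weight per input, plus one bias
    return n_inputs * n_nodes + n_nodes

print(dense_params(59168, 16))  # 946,704 (model #26)
print(dense_params(59168, 8))   # 473,352 (model #28)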

Comparing the total number of parameters, model #26 has 963k parameters whereas model #28 has 490k parameters, only half of model #26. Because of this, model #28 is a strong contender against model #26. Yes, model #28 has more validation fluctuation, but it has half the number of parameters.

We can see that the validation accuracy is still low. In part 2 of this article (link) I’m going to address that issue.

As we can see above, configuring a CNN is more of an art than a science. But after a few projects we should get some understanding of how each hyperparameter influences the result. Happy learning!

1 July 2021

What is Convolution?

Filed under: Machine Learning — Vincent Rainardi @ 7:32 am

For me image classification/recognition is one of the most exciting topics in machine learning (ML). Today all good image classifiers use neural networks. For image classification we use a specific type of neural network called a Convolutional Neural Network (CNN).

I had heard this term so many times in the last 3 years, but I never understood what convolution meant. So in this article I would like to explain what convolution means.

Image Classification

Since 2015 ML has been better than humans at classifying images. A lot faster no doubt, but also more accurate. Here are a few ML architectures which made historical landmarks in the ImageNet image classification competition (source: Gordon Cooper, Semiconductor Engineering, link):

The competition is about classifying 1.2m training images into 1000 categories (link). All the dark purple deep learning architectures above are convolutional neural networks (CNNs). Over the years, the number of layers gradually increased as the available computing power increased.

  • AlexNet started the deep learning revolution in 2012 by using a CNN and graphics processing units (GPUs), achieving a massive improvement over the previous year's result (link).
  • ResNet (which stands for Residual Neural Network) can bypass 2-3 layers if those layers are not useful (link). This concept was inspired by the pyramidal cells in the cerebral cortex.
  • SENet (stands for Squeeze and Excitation Network) can adaptively recalibrate channel-wise feature responses (link).

The applications are massive and life changing, from detecting cancer in medical images to self-driving cars, from product search to face recognition (link).

What is Convolution?

Convolution is a mathematical operation between two functions, as follows: reverse and shift one function, then take the product of both functions, then take the integral (link).

But in image processing, convolution is the process of applying a filter to an image. This is because:

  1. Mathematically speaking, a “convolution” in the time domain becomes a “multiplication” in the frequency domain (link).
  2. Applying a filter to an image is a multiplication process.

Let’s go through some examples so point 2 above becomes clear.

If we have this image and this filter, this is the convolution:

We get the yellow 5 on the convolution by multiplying the yellow area on the image by the filter:

So we multiply the green cell on the image with the green cell on the filter (1×1), the blue cell on the image with the blue cell on the filter (0×1), etc., and then add them up to get 5 on the convolution:

Similarly to get the yellow 4 on the convolution, we multiply the yellow area on the image by the filter:
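
Since the figures are not reproduced here, below is a minimal sketch in Python of this sliding multiply-and-sum operation. The image and filter values are made up for illustration (note that CNN libraries technically compute cross-correlation, i.e. the filter is not flipped):

import numpy as np

def convolve2d(image, kernel):
    # slide the kernel over the image, multiply element-wise and sum (no padding, stride 1)
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[1, 0, 1, 2],
                  [2, 1, 0, 1],
                  [1, 2, 1, 0],
                  [0, 1, 2, 1]])
kernel = np.array([[1, 0],
                   [0, 1]])            # made-up 2x2 filter
print(convolve2d(image, kernel))       # 3x3 output of the sliding products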

Why do convolution?

The purpose of doing convolution is to detect a pattern on the image.

If we want to detect if there is a horizontal line on the image then we apply this filter:

If we want to detect if there is a vertical line on the image then we apply this filter:

And if we want to detect if there is a diagonal line on the image then we apply this filter:

This is called “feature extraction”. We use convolution to detect “lines” on the image.

The same 3 filters above not only detect "lines" on the image but they also detect "areas". It is probably easier to see if we don't have the numbers on the cells, see below right:

Note: we call these lines and areas "edges", meaning the borders of an area. The 3 filters above detect "edges".

So next time people say Convolutional Neural Network, you know what Convolution means 🙂

In this article (link) I explain what CNN is.

8 June 2021

Which machine learning algorithms should I use?

Filed under: Machine Learning — Vincent Rainardi @ 5:06 am

Every month I learn a new machine learning algorithm. Until today I’ve learned about ten algorithms and whenever I’m trying to solve a machine learning problem, the question is always “Which algorithm should I use?”

Almost every machine learning practitioner knows that the answer depends on whether the problem is supervised or unsupervised, then classification or regression. So deciding which algorithm to use is quite straightforward, right?

Well, no. Take classification for example. We can use Logistic Regression, Naive Bayes, Support Vector Machine, Decision Tree, Random Forest, Gradient Boosting or Neural Network. Which one should we use?

“Well, it depends” is the answer we often hear. “Depends on what?” that is the question! It would be helpful if we know what factors to consider right?

So in this article I would like to try answering those questions. I'm going to first address the general question of "Which machine learning algorithm should I use?" This is useful when you are new to machine learning and have never heard about classification and regression, let alone ensembles and boosting. There are many good articles already written about this, so I'm going to point you to them.

Then as an example I’m going to dive specifically into classification algorithms. I’ll try to give a brief outline on what factors we need to consider when deciding, such as linearity, interpretability, multiclass and accuracy. Also the strengths and weaknesses of each algorithm.

General guide on which ML algorithms to use

I would recommend that you start with Hui Li’s diagram: link. She categorised ML algorithms into 4: clustering, regression, classification and dimensionality reduction:

It is very easy to follow, and it is detailed enough. She wrote it in 2017 but by and large it is still relevant today.

The second one that I’d recommend is Microsoft’s guide: link, which is newer (2019) and more comprehensive. They categorise ML algorithms into 8: clustering, regression, classification (2 class and multiclass), text analytics, image classification, recommenders, and anomaly detection:

So now you know roughly which algorithm to use for each case, using the combination of Hui Li's and Microsoft's diagrams. In addition to that, it would be helpful if you read Danny Varghese's article, a comparative study of machine learning algorithms: link. For every algorithm Danny outlines the advantages and disadvantages against other algorithms in the same category. So once you choose an algorithm based on Hui Li's and Microsoft's diagrams, check that algorithm against the alternatives on Danny's list to make sure that the advantages outweigh the disadvantages.

Classification algorithms: which one should I use?

For classification we can use Logistic Regression, Naive Bayes, Support Vector Machine, Decision Tree, Random Forest, Gradient Boosting Machine (GBM), Perceptron, Linear Discriminant Analysis (LDA), K Nearest Neighbours (KNN), Learning Vector Quantisation (LVQ) or Neural Network. What factors do we need to consider when deciding? And what are the strengths and weaknesses of each algorithm?

The factors we need to consider are: linearity, interpretability and multiclass.

The first consideration is linearity of the data. The data is linear if the plot between the predictor and the target variable is separable by a straight line, like below.

Note that the plots above are over-simplified, as the reality is not just 2 dimensions but many dimensions (e.g. we may have 8 predictors, i.e. 8 X axes), so the separator is not a line but a hyperplane.

  1. If the data is linear, we can use (link): Logistic Regression, Naive Bayes, Support Vector Machine, Perceptron, Linear Discriminant Analysis.
  2. If the data is not linear, we can use (link): Decision Tree, Random Forest, Gradient Boosting Machine, K Nearest Neighbours, Neural Network, Support Vector Machine using Kernel, Learning Vector Quantisation.

Can we use algorithms in #2 for linear classification? Yes we can, but #1 is more suitable.

Can we use #1 for non-linear classification? No we can't, not without modification. But there are ways to transform data from a non-linear space to a linear space. This is called the "kernel trick", see my article here: link.
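
As a minimal sketch of this (using scikit-learn with a toy non-linear dataset; the dataset and parameters are just illustrative), a linear classifier struggles on concentric circles while an SVM with an RBF kernel separates them easily:

from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Two concentric circles: not separable by a straight line
X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

linear = LogisticRegression().fit(X_train, y_train)
rbf = SVC(kernel="rbf").fit(X_train, y_train)

print("Logistic Regression:", linear.score(X_test, y_test))  # around chance level
print("SVM with RBF kernel:", rbf.score(X_test, y_test))     # close to 1.0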

The second factor that we need to consider is interpretability, i.e. the ability to explain why a data point is classified into a certain class. Christoph Molnar explains interpretability in great details: link.

  • If we need to be able to explain, we can use Logistic Regression, Naive Bayes, Decision Tree or Linear Support Vector Machine.
  • If we don't need to be able to explain, we can use Random Forest, Support Vector Machine with Kernel (see Hugo Dolan's article: link), Gradient Boosting Machine, K Nearest Neighbours, Neural Network, Perceptron, Linear Discriminant Analysis or Learning Vector Quantisation.

The third factor that we need to consider is whether we are classifying into two classes (binary classification) or more than two classes (multi-class). Support Vector Machine (SVM), Linear Discriminant Analysis (LDA) and Perceptron are binary classifiers, but everything else can be used for both binary and multi-class. We can make LDA multi-class, see here: link. Ditto SVM: link.

1. Logistic Regression

Strengths: good accuracy on a small amount of data, easy to interpret (we get feature importance), easy to implement, efficient to train (doesn't need high compute power), can do multi-class.

Weaknesses: tends to overfit in high dimensions (use regularisation), can't do non-linear classification (or complex relationships), not good with multicollinearity, sensitive to outliers, requires a linear relationship between the log odds and the independent variables.

2. Naive Bayes

Strengths: good accuracy on a small amount of data, efficient to train (doesn't need high compute power), easy to implement, highly scalable, can do multi-class, can do continuous and discrete data, not sensitive to irrelevant features.

Weaknesses: features must be independent; a category which exists in the test dataset but not in the training dataset will get zero probability (the zero frequency problem).

3. Decision Tree

Strengths: easy to interpret (intuitive, show interaction between variables), can classify non-linear data, data doesn’t need to be normalised nor scaled, not affected by missing values, not affected by outliers, performs well with unbalanced data (the nature of data distribution does not matter), can do both classification and regression, can do both numerical and categorical data, provide feature importance (calculated from the decrease in node impurity), good with large dataset, able to handle multicollinearity.

Weaknesses: has a tendency to overfit (bias towards the training set, requires pruning), not robust (high variance: a small change in the training data results in a major change in the model and output), not good with continuous variables, requires a longer time to train the model (resource intensive).

4. Random Forest

Strengths: high accuracy, doesn’t need pruning, no overfitting, low bias with quite low/moderate variance (because of bootstrapping), can do both classification and regression, can do numerical and categorical, can classify non-linear data, data doesn’t need to be normalised nor scaled, not affected by missing values, not affected by outliers, performs well with unbalanced data (the nature of data distribution does not matter), can be parallelised (can use multiple CPUs in parallel), good with high dimensionality.

Weaknesses: long training time, requires large memory, not interpretable (because there are hundreds of trees).

5. Support Vector Machine (Linear Vanilla)

Strengths: scales well with high dimensional data, stable (low variance), less risk of overfitting, doesn't rely on the entire data (not affected by missing values), works well with noise.

Weaknesses: long training time for large data, requires features scaling.

6. Support Vector Machine (with Kernel)

Strengths: scales well with high dimensional data, stable (low variance), handles non-linear data very well, less risk of overfitting (because of regularisation), good with outliers (has gamma and C to control), can detect outliers in anomaly detection, works well with noise.

Weaknesses: long training time for large data, tricky to find appropriate kernel, need large memory, requires features scaling, difficult to interpret.

7. Gradient Boosting

Strengths: high accuracy, flexible with various loss functions, minimal pre-processing, not affected by missing values, works well with unbalanced data, can do both classification and regression.

Weaknesses: tendency to overfit (because it keeps minimising errors), sensitive to outliers, large memory requirement (thousands of trees), long training time, large grid search for hyperparameters, not good with noise, difficult to interpret.

8. K Nearest Neighbours

Strengths: simple to understand (intuitive), simple to implement (both binary and multi-class), handles non-linear data well, non parametric (no requirements on data distribution), responds quickly to data changes in real-time implementations, can do both classification and regression.

Weaknesses: long training time, doesn’t work well with high dimensional data, requires scaling, doesn’t work well with imbalanced data, sensitive to outliers and noise, affected by missing values.

9. Neural Network

Strengths: high accuracy, handles non-linear data well, generalises well on unseen data (low variance), non parametric (no requirements on data distribution), works with heteroskedastic data (non-constant variance), works with highly volatile data (time series), works with incomplete data (not affected by missing values), fault tolerant.

Weaknesses: requires a large amount of data, computationally expensive (requires parallel processors/GPUs and large memory), not interpretable, tricky to get the architecture right (number of layers, number of neurons, activation functions, etc.).

5 June 2021

The Trick in Understanding Human Language

Filed under: Machine Learning — Vincent Rainardi @ 10:08 am

I started learning Natural Language Processing (NLP) with such enthusiasm. There are 3 stages in NLP. The first stage is lexical analysis, where the root words and phrases are identified, dealing with stop words and misspelling. The second stage is syntactic analysis, where the nouns, verbs, etc. are identified and the grammar is analysed. The third stage is semantic analysis, which is about understanding the meanings of the words.

So I thought, this is amazing! I knew computers now understand human languages, for example Alexa and chatbots. And I would be diving into that wonderful world, learning how it’s done. At the end of this process I would be able to create a chatbot that could understand human language. Cool!

I did build a chatbot that could "understand" human language, but disappointingly it doesn't really understand it. A chatbot uses a "trick" to guess the meaning of our sentences, identifying the most probable intention. It outputs prepared responses, and we need to define which response goes with which input. So no, it does not understand human language in the way I initially thought. We are still far away from having clever robots like in "I, Robot" and "Ex Machina".

In this article I'm writing about that learning experience, hoping that it will enlighten those who have not yet entered the NLP world.

Lexical Analysis

Lexical analysis is about identifying words and phrases, and dealing with stop words and misspelling. I learned how to identify the base form of words, such as "play" in the words "playing" and "player". This process is called stemming, where we apply rules such as removing "ing" and "ion" suffixes. For this we use regular expressions.

The base form of "best" is "good", which can't be identified using stemming. For this we use lemmatisation, which is done using a combination of lookups and rules. Both are widely implemented using NLTK in Python, see Ivo Bernardo's article on stemming (link) and Selva Prabhakaran's article on lemmatisation (link).
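
As a minimal sketch (assuming NLTK and its WordNet data are installed), stemming and lemmatisation in NLTK look roughly like this:

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download('wordnet')    # needed once for the lemmatiser
nltk.download('omw-1.4')    # needed on newer NLTK versions

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem("playing"))                # play
print(stemmer.stem("player"))                 # player (stemming is rule based, so not perfect)
print(lemmatizer.lemmatize("best", pos="a"))  # good (lemmatisation uses a dictionary lookup)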

But before that we need to break the text into paragraphs, sentences and words. This is called tokenisation. We deal with "she'd" and "didn't", which are actually "she would" and "did not". We deal with tokens which are not words, like dates, times (e.g. "3:15"), symbols, email addresses, numbers, years and brackets. See my article on tokenisation here: link.

Then we need to deal with misspelling and for this we need to know how similar two words are using edit distance. Edit distance is the number of operations (like delete a letter, insert a letter, etc.) required to change one word into another.
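
For example (a minimal sketch using NLTK's built-in Levenshtein edit distance):

from nltk.metrics.distance import edit_distance

print(edit_distance("color", "colour"))      # 1: insert the letter "u"
print(edit_distance("lean", "learn"))        # 1: insert the letter "r"
print(edit_distance("machine", "machines"))  # 1: insert the letter "s"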

A crude way of representing a text is using a "bag of words". First we remove the stop words such as the, in, a, is, etc., because stop words exist in every text so they don't provide useful information. Then we construct a dictionary from the distinct list of words in the text. For every sentence we mark each word according to whether it exists in the dictionary or not. The result is that a sentence is now converted to a series of 1s and 0s. A more sophisticated version uses the word frequency instead of 1s and 0s, see: link. Either way, in the end the sentences are converted into numbers.
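
A minimal sketch of a bag of words using scikit-learn (the sentences are made up; CountVectorizer uses word counts rather than just 1s and 0s, and stop words are removed with the built-in English list):

from sklearn.feature_extraction.text import CountVectorizer

sentences = ["the cat sat on the mat",
             "the dog sat on the log",
             "cats and dogs are pets"]

vectorizer = CountVectorizer(stop_words="english")
bag = vectorizer.fit_transform(sentences)

print(vectorizer.get_feature_names_out())  # the dictionary of distinct words
print(bag.toarray())                       # each row is a sentence represented as numbers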

Once a document is converted into numbers, we can run machine learning algorithms on it such as classification. For example we can classify whether an email / text is a spam or not.

That is, in 1 minute, Lexical Analysis 🙂 We can (crudely) represent a document as numbers and use this numerical representation to classify documents. But at this stage the machine doesn’t understand the documents, at all.

Syntactic Analysis

Syntactic Analysis is about breaking (or parsing) a sentence into phrases such as noun phrases, verb phrases, etc. and recognising them. We do this because the meaning of a word (e.g. "play") depends on whether it is a noun or a verb. These "noun", "verb", "adjective", "preposition", etc. labels are called "parts of speech", or POS for short. So the first step is to identify the POS tag for each word.
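
A minimal sketch of POS tagging with NLTK (assuming the tokeniser and tagger models have been downloaded):

import nltk

nltk.download('punkt')                       # tokeniser model
nltk.download('averaged_perceptron_tagger')  # default POS tagger

words = nltk.word_tokenize("The sun shines brightly")
print(nltk.pos_tag(words))
# e.g. [('The', 'DT'), ('sun', 'NN'), ('shines', 'VBZ'), ('brightly', 'RB')]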

There are many different approaches for doing POS tagging: supervised, unsupervised, rule based, stochastic, Conditional Random Fields, Hidden Markov Model, memory based learning, etc. Fahim Muhammad Hassan cataloged them in his thesis: link.

  • The Hidden Markov Model (HMM) is arguably the most popular, where the POS tag is determined based not only on the word, but also the POS tag of the previous word. Many have written about HMM and its implementation in Python. For introduction I recommend Raymond Kwok’s article (link) and for a formal lecture Ramon van Handel from Princeton University (link).
  • The best approach in terms of accuracy is the Recurrent Neural Network (RNN). An RNN uses a deep learning approach where the output of one step is fed into the next step. The most popular implementations are LSTM and GRU. Tanya Dayanand wrote a good short explanation here: link (notebook here).

Once we know the POS tags for each word, we can now parse or break a sentence into phrases (e.g. noun phrase, verb phrase, etc.) or into subject, modifier, object, etc. in order to understand them. The former is called constituency grammar and the latter is called dependency grammar.

  • Constituency grammar: the most popular method is Context Free Grammar (CFG, link), which specifies the rules of how words are grouped into phrases. For example, a noun phrase (e.g. “the sun”) may consist of a determinant (the) and a noun (sun). A sentence can consist of a noun phrase and a verb phrase, e.g. the sun shines brightly.
  • In dependency grammar we first identify the root verb, followed by the subject and object of that verb. Then the modifiers, i.e. an adjective, noun or preposition that modifies the subject or the object. The most popular framework is Universal Dependencies (link). Two of the most popular Universal Dependency English parsers are from Georgetown University (link) and Stanford (link).

In addition to parsing sentences into phrases, we need to identify named entities such as city name, person name, etc. In general this subject is called Information Extraction (link) covering the whole pipeline from pre-processing, entity recognition, relation recognition, record linkage and knowledge generation. Recognising named entities is vital for chatbots in order to understand the intention. There are many approaches in Named Entity Recognition (NER) such as Naive Bayes, Decision Trees and Conditional Random Field (see Sidharth Macherla’s article: link). There are many good libraries that we can use, such as NLTK, spaCy and Stanford. Susan Li wrote a good article on NER implementation: link.
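
As a minimal sketch of NER using spaCy (assuming the small English model has been installed with "python -m spacy download en_core_web_sm"):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I went to Paris and saw the Eiffel Tower")

for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Paris as GPE; the exact labels depend on the model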

That is syntactic analysis in 2 minutes, in which we break sentences into phrases and words, and recognise each word as a verb, a noun, etc. or a named entity. At this stage the machine still doesn't understand the meaning of the sentence!

Semantic Analysis – The Trick

Now that we have parsed sentences into words and identified the named entities, the final step is to understand those words. This is the biggest learning point for me in NLP. Machines don't understand the text word by word like humans do, but by converting each word into a numerical representation (called a vector) and then extracting the topic. The topic is the centre of those word vectors.

And that is the big "trick" in NLP. We can do all the lexical analysis and syntactic analysis we want, but in the end we need to convert the words into vectors, and the centre of those vectors is the meaning of those words (the topic). So the meaning is also a vector!

In the real world the vector representations have hundreds of dimensions; in the diagrams below I only use 2 dimensions, so they are massively over-simplified, but I hope they get my point across. In diagram A we have a sentence "Running is a sport". Each word is a blue circle and the centre (the centroid) is the solid blue circle. The vector representing this centre is the blue arrow. This blue arrow is the "meaning", which is just the bunch of numbers that make up that vector (in reality it's hundreds of numbers).

In diagram B we have another sentence “He walks as an exercise”. Each word is a brown circle and the centre is the solid brown circle. The vector representing this centre is the brown arrow. That brown arrow is the “meaning” of that sentence. So the meaning is just a bunch of numbers that make up that brown arrow.

In diagram C we superimpose diagram A and diagram B, and in diagram D we remove the word vectors, leaving just the 2 meaning vectors. Now we can find out how close the meanings of the 2 sentences are, just by looking at how close these 2 vectors are.

Remember that in reality it's not 2 dimensions but hundreds of dimensions. But you can clearly see the mechanism here. We convert sentences into numbers (vectors) and we compare the numbers. So the computer still doesn't understand the sentences, but it can compare sentences.

Say we have a collection of sentences about cooking. We can represent each of these sentences as numbers/vectors. See the left diagram below. The blue circles are the sentences and the solid blue circle is the centre.

If we have a collection of sentences about banking, we can do the same. Ditto with holiday. Now we have 3 blue dots (or 3 blue arrows), each representing a different topic. One for cooking, one for banking, one for holiday. See the right diagram above.

Now if we have an input, like "I went to Paris and saw the Eiffel Tower", the NLP will be able to determine whether this input is about holiday, cooking or banking, without even knowing what a holiday, cooking or banking are! Without even knowing what the Eiffel Tower and Paris are. Or even what "went" and "saw" are. All it knows is that the vector for "I went to Paris and saw the Eiffel Tower" is closer to the holiday vector than to the cooking vector or the banking vector. Very smart!

That is the trick in understanding human languages. Convert the sentences into numbers and compare them!

Semantic Analysis – The Steps

Semantic means meaning. Semantic Analysis is about understanding the meaning. Now that we have an idea of how the semantic analysis trick is done, let's go through the steps.

First we convert the words into vectors. There are 2 approaches for doing this:

  1. Frequency based
  2. Prediction based

In the frequency based approach the basic assumption is: words which are used and occur in the same context (e.g. a document or a sentence) tend to have similar meanings. This principle is called Distributional Semantics (link). First we create a matrix containing the word counts per document, i.e. the occurrence frequency of each word. This matrix is called the Occurrence Matrix; the rows are the words and the columns are the documents. Then we reduce the number of rows in this matrix using Singular Value Decomposition (link). Each row in this final matrix is the word vector; it represents how that word is distributed across the various documents. That is the vector for that word. Examples of the frequency based approach are Latent Semantic Analysis (link) and Explicit Semantic Analysis (link).

The prediction based approach uses a neural network to learn how words are related to each other. The input of the neural network is the word, represented as a one-hot vector (link), which means that all numbers are zero except one. There is only 1 hidden layer in the neural network, with hundreds of neurons. The output of the neural network is the context words, i.e. the words closest to the input word. For example: if the input word is "car", the outputs are like below left, with the vector representing the word "car" on the right (source: link).

Examples of the prediction approach are Word2Vec from Google (link), GloVe from Stanford (link) and fastText from Facebook (link).

Once the words become vectors, we use cosine similarity (link) to find out how close the vectors are to each other. And that is how computers "understand" human language, i.e. by converting it into vectors and comparing them with other vectors.
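
As a minimal sketch (with tiny made-up 2-dimensional word vectors instead of real embeddings with hundreds of dimensions), here is the averaging-and-cosine-similarity idea from the diagrams above:

import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up 2D word vectors (real embeddings such as Word2Vec have hundreds of dimensions)
word_vectors = {
    "running":  np.array([0.9, 0.1]),
    "sport":    np.array([0.8, 0.2]),
    "walks":    np.array([0.7, 0.3]),
    "exercise": np.array([0.8, 0.1]),
    "bank":     np.array([0.1, 0.9]),
    "money":    np.array([0.2, 0.8]),
}

# The "meaning" of a sentence is the centroid of its word vectors
sentence1 = np.mean([word_vectors[w] for w in ["running", "sport"]], axis=0)
sentence2 = np.mean([word_vectors[w] for w in ["walks", "exercise"]], axis=0)
sentence3 = np.mean([word_vectors[w] for w in ["bank", "money"]], axis=0)

print(cosine_similarity(sentence1, sentence2))  # high: both sentences are about exercise
print(cosine_similarity(sentence1, sentence3))  # lower: different topics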

Chatbot

I'm going to end this article with chatbots. A chatbot is a conversational engine/bot, which we can use to order tickets, book a hotel, talk to customer service, etc. We can build a chatbot using Rasa (link), IBM Watson (link), Amazon Lex (link) or Google Chat (link).

A chatbot has 2 components:

  • Natural Language Processing (NLP)
  • Dialogue Management

The Natural Language Processing part does Named Entity Recognition (NER) and intention classification. The NER part identifies named entities such as city names, person names, etc. The intention classification part detects the intention of the input sentence. For example, for a hotel booking chatbot the intentions can be greeting, finding hotels, specifying a location, specifying a price range, making a booking, etc.

The Dialogue Management part determines the response and next step for each intention. For example, if the intention is greeting, the response is saying "Hi, how can I help you?" then waiting for an input. If the intention is "finding hotels", the response is asking "In which location?" then waiting for an input.

And that's it. The intention classification uses the "trick" I explained in this article to understand human language. It converts the sentence into a vector and compares it with the list of intentions (which have been converted into vectors too). That's how a chatbot "understands" what we are typing. And then it uses a series of "if-then-else" rules to output the correct response for each intention. Easy, isn't it?

No, from my experience it's not easy. We need to prepare lots of examples to train the NLP. For each intention we need to supply many examples. For each location we need to specify the other possible names. For example: Madras for Chennai, Bengaluru for Bangalore and Delhi for New Delhi. And we need to provide a list of cities that we are operating in. And we need to cover so many possible dialogue flows in the conversation. And then we need to run it sooo many times over and over again (and it could take 15-30 minutes per run!), each time correcting the mistakes.

It was very time consuming but it was fun and very illuminating. Now I understand what's going on behind the scenes when I'm talking to a chatbot on the internet, or talking to Alexa in my kitchen.

31 May 2021

Why Linear Regression is so hard

Filed under: Machine Learning — Vincent Rainardi @ 8:36 am

2 years ago I thought linear regression was the easiest algorithm. But it turns out that it is quite difficult to do properly, because the X and the Y must have a linear relationship, and the errors must be normally distributed, independent and have equal variance. Data like that is much rarer in nature than I initially thought. And if these 4 criteria are not satisfied, we can't use linear regression. In addition we also face multicollinearity, overfitting and extrapolation when doing linear regression. In this article I would like to explain these issues, and how to solve them.

Criteria 1. X and Y must have a linear relationship

The first issue is the relationship between the X (independent variables) and the Y (the predicted variable) might not be linear. For example, below is a classic case of a “lower tail” where below x1 the data is lower than the linear values (the red points).

Criteria 2. Error terms must be distributed normally

The second issue is that the errors might not be distributed normally. Below left is an example where the error terms are distributed normally. Error terms are the differences between the actual values and the predicted values, aka the residuals. Remember that normally distributed means that 1 standard deviation must cover 68.2% of the data, 2 SD 95.4% and 3 SD 99.7%. Secondly, the centre must be 0. Note: the image on the right is from Wikipedia (link).

Below are 3 examples where the error terms are not distributed normally:

On the left the distribution is almost flat. In the middle, the centre is 2 not 0. On the right, the red bars are too low, so 3 SD covers less than 99.7%. Unless the error terms are distributed normally, we cannot use the linear regression model that we created.

Criteria 3. Error terms must be independent

Error terms must be independent of what? Independent of three things:

  1. of the independent variables (the X1, X2, etc)
  2. of the predicted variable (the Y)
  3. of the previous error terms (see: Robert Nau’s explanation here)

See below for 3 illustrations where the error terms are not independent:

  • Left image: the error terms are correlated to one of the independent variables. In this example the higher the X the lower the error terms.
  • Middle image: the error terms are correlated to the predicted variable. In this example the higher the Y the higher the error terms.
    Note: in linear regression "the predicted variable" can mean two things: the actual values or the predicted values. In the context of error term independence the convention is the predicted values (y hat), because that is what the model represents and we want to know if we can use the model or not. That said, the plot would look similar if we used the actual values rather than the predicted values, because the error terms are the difference between the actual values and the predicted values.
  • Right image: the error terms are correlated to the previous value of the error terms. This one is also called autocorrelation or serial correlation; it usually happens on time series data.

The reason why we cannot use the model if the error terms are not independent is because the model is biased and therefore not accurate. For example, on the left and middle plots above we can see that the error term (the difference between the actual value and the predicted value, which reflects the model's accuracy) changes depending on the independent variable and the dependent variable.

Independent error terms means that the error terms are randomly scattered around 0 (with regards to the predicted values), like this:

Notice that this chart is between the error terms and the predicted values (y hat), not the actual values.
There are 3 things that we should check on the above scattered chart:

  1. That positive and negative error terms are roughly distributed equally. Meaning that the number of data points above and below the x axis are roughly equal.
  2. That there are no outliers. Meaning that there are no data points which are far away from everything else. For example: all data points are within -2 to +2 range but there is a data point at +4.
  3. Most of the error terms are around zero. Meaning that the further away we move vertically from the x axis, the less crowded the data points are. This is to satisfy the “error terms should be distributed normally” criteria which is centered on zero.

Criteria 4. Error terms must have equal variance

It means that the data points are scattered equally around zero, no matter what the predicted values are. In the image below the error terms are not the same across the predicted values (Y hat). Around Y hat = a the error terms have a small variance, at Y hat = b the error terms have a large variance and at Y hat = c the error terms have a small variance.

What should we do?

If the X-Y plot or the residual plot indicates that there is a non-linear relationship in the data (i.e. the 4 points above), there are four things we can do:

  1. We can transform the independent variables or the predicted variable.
  2. We can use polynomial regression
  3. We can do non-linear regression
  4. We can do segmented regression

The first thing is transforming one or more of the independent variables (X) into ln(X), e^X, e^-X, square root of X, etc:

  • First, we need to find out which independent variable is not linear. This is done by plotting each independent variable against the predicted variable (one by one).
  • Then we choose a suitable transformation based on the chart from the first step above, for example: (graphs from fooplot.com)
  • Then we transform the non-linear independent variable, for example we transform X to ln(X), and we use this ln(X) as the independent variable in the linear regression.

The second one is using polynomial regression instead of linear regression, like this:

We can read about polynomial regression in Wikipedia (link) and in Towards Data Science (link, by Animesh Agarwal). As we can read in Animesh's article, the degree of the polynomial that we choose affects the overfitting, so it's a trade-off between bias and variance.
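
A minimal sketch of polynomial regression with scikit-learn (the data is made up; the degree is the hyperparameter that controls the bias-variance trade-off mentioned above):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Made-up non-linear data: y is roughly quadratic in x, with noise
rng = np.random.default_rng(42)
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = 2 + 0.5 * X.ravel() ** 2 + rng.normal(0, 2, 50)

# Degree 2 polynomial regression: still a linear model, but in the transformed features
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(model.score(X, y))  # R-squared on the training data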

The third one that we can do is non-linear regression. By non-linear I mean the model parameters/coefficients (the betas), not the independent variables (the X). Meaning that it is not in the form of “y = beta1 something + beta2 something + beta3 something + …” For example, this is a non-linear regression:

In non-linear regression we approximate the model using a first order Taylor series. We can read about non-linear regression in Wikipedia (link).

The last one is segmented regression, where we partition the independent variables into several segments, and for each segment we use linear regression. So instead of 1 long line, the linear regression is several "broken lines". That is why this technique is known as "broken-stick regression", which we can read about in Wikipedia: link. It is also known as "piecewise regression", as the Python implementation uses the numpy.piecewise() function, which we can read about on Stack Overflow: link.
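
A minimal sketch of a piecewise (broken-stick) model using numpy.piecewise; the breakpoint and the two slopes are made up for illustration (in practice they would be estimated from the data):

import numpy as np

breakpoint = 5.0  # hypothetical breakpoint between the two segments

def segmented(x):
    # a different linear model on each side of the breakpoint, joined at the breakpoint
    return np.piecewise(
        x,
        [x < breakpoint, x >= breakpoint],
        [lambda x: 1.0 + 2.0 * x,                                      # segment 1: slope 2
         lambda x: 1.0 + 2.0 * breakpoint + 0.5 * (x - breakpoint)])   # segment 2: slope 0.5

x = np.linspace(0, 10, 11)
print(segmented(x))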

Multicollinearity, overfitting, extrapolation

At the beginning of this article I also mentioned about these 3 issues when doing linear regression. What are these issues and how do we solve them?

Multicollinearity means that one of the independent variables is highly correlated with another independent variable. This is a problem because it causes the model to have high variance, i.e. the model coefficients change erratically when there are small changes in the data, making the model unstable.

The solution is to drop one of the multicollinear variables. We can read more about multicollinearity in Wikipedia, including a few other solutions: link.

Overfitting happens when we use high degree polynomial regression. We detect overfitting by comparing the accuracy in the training and test data set. If the accuracy on the training data set is very high (>90%) and the accuracy on the test data set is much lower (a difference of 10% or more) then the model is overfitting (see: link).

The solution is to use regularisation such as Lasso or Ridge (link), using feature selection (link), or using cross validation (link).
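
A minimal sketch of detecting overfitting and applying regularisation with scikit-learn (the data and the degree/alpha values are made up for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Made-up data, roughly quadratic with noise
rng = np.random.default_rng(42)
X = np.linspace(0, 10, 60).reshape(-1, 1)
y = 2 + 0.5 * X.ravel() ** 2 + rng.normal(0, 2, 60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A deliberately high-degree polynomial, without and with L2 (Ridge) regularisation
plain = make_pipeline(PolynomialFeatures(degree=9), LinearRegression()).fit(X_train, y_train)
ridge = make_pipeline(PolynomialFeatures(degree=9), Ridge(alpha=1.0)).fit(X_train, y_train)

# A large gap between the training and test scores indicates overfitting
print(plain.score(X_train, y_train), plain.score(X_test, y_test))
print(ridge.score(X_train, y_train), ridge.score(X_test, y_test))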

Extrapolation is about using the linear, polynomial or non-linear regression model beyond the range of the training and test data. The considerations and real world examples are given in this Medium article by Dennish Ash: link.

The solution is to review the linearity of the relationship between the independent variable and the predicted variable in the data range where we want to extrapolate. We review it using business sense (not using data), checking if the relationship is still linear outside the data range that we have.

One consideration is that the further the distance to the training and test data range, the more risky the extrapolation. For example, if in the training and test data the independent variable is between 20 and 140, predicting the output for 180 is more risky than predicting the output for 145.

Note on plots in machine learning

Machine learning is a science about data, and as such, when making plots/graphs we must always make the meaning of each axis clear. And yet, bizarrely, during my 2 years in machine learning I have encountered so many graphs with the axes not labelled! This irritates me so much. We must label the axes properly, because depending on what the axes are, the graph could mean an entirely different thing.

For example: the graph below says heteroscedastic but has no label on either the y axis or the x axis. So how can we know what those data points are? Is it the independent variable against the dependent variable? It turns out that the x axis is the predicted value and the y axis is the error term.

27 May 2021

Ensembles – Odd and Even

Filed under: Machine Learning — Vincent Rainardi @ 7:01 am

In machine learning we have a technique called ensembles, i.e. we combine multiple models. The more models we use, the higher the chance of getting it right. That is understandable and expected. But whether the number of models is odd or even has a significant effect too. I didn't expect that, and in this short article I would like to share it.

I'll start from the end. Below are the probabilities of using 2 to 7 models to predict a binary output, i.e. right or wrong. Each model has a 75% chance of getting it right, i.e. correctly predicting the output.

If we look at the top row (2, 4 and 6 models) the probability of the ensemble getting it right increases, i.e. 56%, 74%, 83%. If we look at the bottom row (3, 5 and 7 models) it also increases, i.e. 84%, 90%, 93%.

But from 3 models to 4 models it goes down from 84% to 74%, because we have 21% of "Not sure". This 21% is when 2 models are right and 2 models are wrong, and therefore the output is "Not sure". Therefore we would rather use 3 models than 4, in terms of the chance of getting it right (correctly predicting the output).

The same thing happens between 5 and 6 models. The probability of the ensemble getting it right decreases from 90% to 83% because we have 13% of "Not sure". This is where 3 models are right and 3 models are wrong, so the output is "Not sure".

So when using ensembles to predict a binary output we need to use an odd number of models, because then there is no "Not sure" case where equal numbers of models are right and wrong.

We also need to remember that each model must have a >50% chance of predicting the correct result, because if not, the ensemble is weaker than the individual model. For example, if each model has only a 40% chance of predicting the correct output, then using 3 models gives us 35%, 5 models 32% and 7 models 29% (see below).
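
These numbers come from the binomial distribution: with an odd number of models, majority voting is right when more than half of the models are right. A minimal sketch of the calculation:

from math import comb

def majority_vote_accuracy(n_models, p):
    # probability that more than half of the n models are right (n must be odd)
    k_min = n_models // 2 + 1
    return sum(comb(n_models, k) * p**k * (1 - p)**(n_models - k)
               for k in range(k_min, n_models + 1))

for n in (3, 5, 7):
    print(n, round(majority_vote_accuracy(n, 0.75), 2))  # 0.84, 0.9, 0.93
    print(n, round(majority_vote_accuracy(n, 0.40), 2))  # 0.35, 0.32, 0.29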

The second thing that we need to remember when making an ensemble of models is that the models need to be independent, meaning that they have different areas of strength.

We can see that this "independence" principle is reflected in the calculation for each ensemble. For example: for 3 models, when all 3 models get it right, the probability is 75% x 75% x 75% (see below). This 75% x 75% x 75% means that the 3 models are completely independent of each other.

This "complete independence" is a perfect-world condition and it doesn't happen in reality, so in practice the ensemble probability is lower than the figures calculated above. But we have to try to make the models as independent as we can, meaning we need to make them as different as possible, with each model having its own areas of speciality, its own areas of strength.

24 May 2021

SVM with RBF Kernel

Filed under: Machine Learning — Vincent Rainardi @ 5:20 pm

Out of all machine learning algorithms, SVM with RBF Kernel is the one that fascinates me the most. So in this article I am going to try to explain what it is, and why it works wonders.

I will begin by explaining a problem, and how this algorithm solves that problem.

Then I will explain what it is. SVM = Support Vector Machine, and RBF = Radial Basis Function. So I'll explain what a support vector is, what a support vector machine is, what a kernel is and what a radial basis function is. Then I'll combine them all and give an overall picture of what SVM with RBF Kernel is.

After we understand what it is, I’m going to briefly explain how it works.

Ok let’s start.

The Problem

We need to classify 1000 PET scan images into cancer and benign. Whether a scan is cancer or benign is affected by two variables, X and Y. Fortunately the cancer and benign scans are linearly separable, like this:

Figure 1. Linearly separable cancer and benign scans

We call this space a "linear space". In this case we can find the equation of a line which separates the cancer and benign scans. Because the data is linearly separable we can use linear machine learning algorithms.

The problem is when the data set is not linearly separable, like this:

Figure 2. Cancer and benign scans which are not linearly separable

In this case it is separable by an ellipse. We call this space a non-linear space. We can find the equation of the ellipse, but it won't work with linear machine learning algorithms.

The Solution

The solution to this problem is to transform the non linear space into a linear space, like this:

Figure 3. Transforming a non-linear space into a linear space

Once it is in a linear space, we can use linear machine learning algorithms.

Why is it important to be able to use a linear ML algorithm? Because there are many popular linear ML algorithms which work well.

What is a Support Vector?

The 4 data points A, B, C, D in figure 4 below are called support vectors. They are the data points located nearest to the separator line.

Figure 4. Support Vectors

They are called support vectors because they are the ones which determine where the separator line is located. The other data points don't matter; they don't affect where the separator line is located. Even if we remove all the other data points the separator line will still be the same, as illustrated below:

Figure 5. Support Vectors affect the separator line

What is a Support Vector Machine?

Support Vector Machine is a machine learning algorithm which uses the support vector concept above to classify data. One of the main features of SVM is that it allows some data points to be deliberately misclassified, in order to achieve a higher overall accuracy.

Figure 6. SVM deliberately allows misclassifiction

In figure 6, data point A is deliberately misclassified. The SVM algorithm ignores data point A so that it can better classify all the other data points. As a result it achieves better overall accuracy compared to if it tried to include data point A. This principle makes SVM work well when the data is partially intermingled.

What is a Kernel?

A kernel is a transformation from one space to another. For example, in figure 7 we transform the data points from variables X and Y to variables R and T. We can take variable R, for example, as "the distance from point A".

Figure 7 Kernel – transforming data from one space to another

To be more precise, transformation like this is called a “Kernel Function” not just a Kernel.

What is a Radial Function?

A radial function is a function whose value depends only on the distance from the point of origin (x = 0, y = 0).

For example, if the distance to the origin is constant, then we get a circle if we plot it on the X and Y axes. In figure 8 below we can see a circle with r = 2, where r is the distance from the origin. In this case r is the radius of the circle and point O is the point of origin.

Figure 8 Radial Function

What is a Radial Basis Function (RBF)?

A Radial Basis Function (RBF) is a radial function whose reference point (its centre) does not have to be the origin. For example, a distance of 3 from the point (5,5) looks like this:

Figure 9. Radial Basis Function

We can sum multiple RBFs to get shapes with multiple centres like this:
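
A common concrete choice (not spelled out in the article) is the Gaussian RBF, exp(-gamma * ||x - c||^2), where c is the centre. A minimal sketch summing two such RBFs with made-up centres:

import numpy as np

def gaussian_rbf(points, centre, gamma=1.0):
    # The value depends only on the (squared) distance from the centre
    return np.exp(-gamma * np.sum((points - centre) ** 2, axis=1))

c1, c2 = np.array([5.0, 5.0]), np.array([-3.0, 2.0])        # two hypothetical centres
grid = np.array([[x, y] for x in np.linspace(-6, 8, 50)
                        for y in np.linspace(-6, 8, 50)])   # a grid of points to evaluate

surface = gaussian_rbf(grid, c1) + gaussian_rbf(grid, c2)   # two "bumps", one per centre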

So what is SVM with RBF Kernel?

SVM with an RBF kernel is a machine learning algorithm which can classify data points that are separated by radial shapes, like this:

Figure 11 SVM with RBF kernel
(source: http://qingkaikong.blogspot.com/2016/12/machine-learning-8-support-vector.html)

And that ability is powerful, because the decision boundary can "hug" the data points closely, separating them precisely.
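
As a quick sketch of this in practice (my own example, not from the article), scikit-learn's SVC with kernel="rbf" can separate a class sitting inside a ring of the other class:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Toy data: one class inside a circle, the other class in a ring around it
X, y = make_circles(n_samples=300, factor=0.4, noise=0.05, random_state=42)

# gamma controls how tightly the boundary hugs the data, C the tolerance for misclassification
model = SVC(kernel="rbf", gamma=2.0, C=1.0).fit(X, y)
print(model.score(X, y))   # training accuracy, close to 1.0 on this easy toy set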

References

  1. SVM: https://en.wikipedia.org/wiki/Support-vector_machine
  2. Kernel: https://en.wikipedia.org/wiki/Positive-definite_kernel
  3. RBF: https://en.wikipedia.org/wiki/Radial_basis_function

22 May 2021

Tokenisation

Filed under: Machine Learning — Vincent Rainardi @ 7:38 am
Tags:

One of my teachers once said to me: the best way to learn something is to write about it. It's been 30 years and his words still ring true in my head.

One of the exciting subjects in machine learning is natural language (NL). There are 2 main subjects in NL: natural language processing (NLP) and natural language generation (NLG).

  • NLP is about processing and understanding human languages in the form of text or voice. For example: reading a book, an email or a tweet; listening to people talking, singing, or the radio.
  • NLG is about creating text or voice in human languages. For example: writing poetry or a news article, generating a voice which says some sentences, singing a song or producing a radio broadcast.

My article today is about NLP. One specific part of NLP. In NLP we have 3 levels of processing: lexical processing, syntactic processing and semantic processing.

  • Lexical processing is looking at a text without considering the grammar. We don't differentiate whether a word is a noun or a verb; in other words, we don't consider the role or position of a word in the sentence. For example, we break a text into paragraphs, paragraphs into sentences and sentences into words. We change each word to its root form, e.g. we change "talking", "talked" and "talks" to "talk" (see the short stemming sketch after this list).
  • Syntactic processing is looking at a text to understand the role or function of each word, because the meaning of a word depends on its role in the sentence. For example: is it the subject, predicate or object? A noun, a verb, an adverb or an adjective? Present tense, past tense or future tense?
  • Semantic processing is trying to understand the meaning of the text. We try to understand the meaning of each word, each sentence, each paragraph and eventually the whole text.
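
For example, the root-form step mentioned in the lexical processing bullet (stemming) can be done with NLTK's PorterStemmer. A minimal sketch:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["talking", "talked", "talks"]])
['talk', 'talk', 'talk']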

My article today is about lexical processing. One specific part of lexical processing, called tokenisation.

Tokenisation is the process of breaking a text into smaller pieces. For example: breaking sentences into words. The sentence: “Are you ok?” she asked, can be tokenised into 5 words: are, you, ok, she, asked.

We can tokenise text in various ways: (source: link)

  • characters
  • words
  • sentences
  • lines
  • paragraphs
  • N-grams

N-gram tokenisation is about breaking text into tokens with N characters in each token.
So a 3-gram has 3 characters in each token. (source: link)

For example: the word “learning” can be tokenised into 3-gram like this: lea, ear, arn, rni, nin, ing.
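
We can reproduce that with NLTK's ngrams helper, which lives in nltk.util and works on any sequence, including a string of characters:

from nltk.util import ngrams

word = "learning"
trigrams = ["".join(g) for g in ngrams(word, 3)]
print(trigrams)
['lea', 'ear', 'arn', 'rni', 'nin', 'ing']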

One of the most popular libraries in NLP is the Natural Language Toolkit (NLTK). In the NLTK library we have a few tokenisers: a word tokeniser, a sentence tokeniser, a tweet tokeniser and a regular expression tokeniser. Let's go through them one by one.

Word Tokenizer

In NLTK we have a word tokeniser called word_tokenize. This tokeniser breaks text into words not only on spaces but also on apostrophes, greater-than and less-than signs, and brackets. Periods, commas and colons become separate tokens.

Python code – print the text:

document = "I'll do it don't you worry. O'Connor'd go at 3 o'clock, can't go wrong. " \
         + "Amazon's delivery at 3:15, but it's nice'. A+B>5 but #2 is {red}, (green) and [blue], email: a@b.com" 
print(document)  
I'll do it don't you worry. O'Connor'd go at 3 o'clock, can't go wrong. Amazon's delivery at 3:15, but it's nice'. A+B>5 but #2 is {red}, (green) and [blue], email: a@b.com

Tokenise using a space:

words = document.split()
print(words)
["I'll", 'do', 'it', "don't", 'you', 'worry.', "O'Connor'd", 'go', 'at', '3', "o'clock,", "can't", 'go', 'wrong.', "Amazon's", 'delivery', 'at', '3:15,', 'but', "it's", "nice'.", 'A+B>5', 'but', '#2', 'is', '{red},', '(green)', 'and', '[blue],', 'email:', 'a@b.com']

Tokenise using word_tokenize from NLTK:

from nltk.tokenize import word_tokenize
words = word_tokenize(document)
print(words)
['I', "'ll", 'do', 'it', 'do', "n't", 'you', 'worry', '.', "O'Connor", "'d", 'go', 'at', '3', "o'clock", ',', 'ca', "n't", 'go', 'wrong', '.', 'Amazon', "'s", 'delivery', 'at', '3:15', ',', 'but', 'it', "'s", 'nice', "'", '.', 'A+B', '>', '5', 'but', '#', '2', 'is', '{', 'red', '}', ',', '(', 'green', ')', 'and', '[', 'blue', ']', ',', 'email', ':', 'a', '@', 'b.com']

We can see above that using spaces we get these:

I’ll   don’t   worry.   O’Connor’d   o’clock,   can’t   Amazon’s   3:15   it’s   A+B>5   #2   {red},   (green)   [blue],   email:   a@b.com

Whereas using word_tokenise from NLTK we get these:

I   ‘ll   n’t   worry   .   O’Connor   ‘d   o’clock   ,   ca   n’t   Amazon   ‘s   3:15   it   ‘s   A+B   >   5   #   2   {  red  }  ,  (  green  )  [  blue  ]  ,   email   :   a   @   b.com

Notice that using NLTK these become separate tokens whereas using spaces they are not:

‘ll  n’t  .  ca  O’Connor  ‘d  o’clock  ‘s  A+B  >  #  {}  ()  []  ,  :   @

Sentence Tokenizer

In NLTK we have a sentence tokeniser called sent_tokenize. This tokeniser breaks text into sentences not only on periods but also on ellipses, question marks and exclamation marks.

Python code – split on period:

document = "Oh... do you mind? Sit please. She said {...} go! So go."
words = document.split(".")
print(words)
['Oh', '', '', ' do you mind? Sit please', ' She said {', '', '', '} go! So go', '']

Using sent_tokenize from NLTK:

from nltk.tokenize import sent_tokenize
sentences = sent_tokenize(document)
print(sentences)
['Oh... do you mind?', 'Sit please.', 'She said {...} go!', 'So go.']

Notice that NLTK breaks the text on periods (.), ellipsis (…),  question mark (?) and exclamation mark (!).

Also notice that when we split on periods we get a space at the beginning of some sentences; using NLTK we don't.

Tweet Tokenizer

In NLTK we also have a tweet tokeniser. We can use it to break a tweet into tokens while keeping smileys, emojis and hashtags intact.

Python code – using NLTK word tokeniser:

document = "I watched it :) It was gr8 <3 😍 #bingewatching"
words = word_tokenize(document)
print(words)
['I', 'watched', 'it', ':', ')', 'It', 'was', 'gr8', '<', '3', '😍', '#', 'Netflix']

Using NLTK tweet tokeniser:

from nltk.tokenize import TweetTokenizer
tknzr = TweetTokenizer()
tknzr.tokenize(document)
['I', 'watched', 'it', ':)', 'It', 'was', 'gr8', '<3', '😍', '#Netflix']

Notice that using the tweet tokeniser we get smileys like <3 and hashtags like #Netflix as single tokens, whereas using the word tokeniser the < and # are split off.
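
The tweet tokeniser also has a few handy options: strip_handles removes @mentions and reduce_len shortens long runs of repeated characters. A small sketch (this example follows the NLTK documentation):

from nltk.tokenize import TweetTokenizer

tknzr = TweetTokenizer(strip_handles=True, reduce_len=True)
tknzr.tokenize("@remy: This is waaaaayyyy too much for you!!!!!!")
[':', 'This', 'is', 'waaayyy', 'too', 'much', 'for', 'you', '!', '!', '!']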

Regular Expression Tokenizer

In NLTK we also have a regular expression tokeniser. We can use it to extract only the tokens which match a regular expression pattern, for example hashtags or numbers.

Python code:

from nltk.tokenize import regexp_tokenize
document = "Watched it 3x in 2 weeks!! 10 episodes #TheCrown #Netflix"
hashtags = r"#[\w]+"
numbers  = r"[0-9]+"

regexp_tokenize(document, hashtags)
['#TheCrown', '#Netflix']

regexp_tokenize(document, numbers)
['3', '2', '10']

Notice that using the regular expression tokeniser we can extract hashtags and numbers. We can also use it to extract dates, email addresses, monetary amounts and so on.
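
For example, a small sketch with simplified (not fully general) patterns for email addresses and dates:

document = "Contact a@b.com or sales@example.co.uk by 25/12/2021"

emails = r"[\w.]+@[\w.]+"        # simplified email pattern
dates  = r"\d{2}/\d{2}/\d{4}"    # dates like 25/12/2021

regexp_tokenize(document, emails)
['a@b.com', 'sales@example.co.uk']

regexp_tokenize(document, dates)
['25/12/2021']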
