Data Warehousing and Data Science

16 April 2021

Logistic Regression with PCA in Python

Filed under: Data Science,Machine Learning — Vincent Rainardi @ 8:31 pm

Logistic Regression means predicting a categorical variable, for example whether a client will invest or not. JavaTPoint provides a good, short overview on Logistic Regression: link. Jurafsky & Martin from Stanford provide a more detailed view, along with the mathematics: link. Wikipedia provides a comprehensive view, as always: link.

In this article I will be writing about how to do Logistic Regression in Python. I won’t be explaining what it is, only how to do it in Python.

PCA means Principal Component Analysis. When we have a lot of variables, we can reduce them using PCA without losing too much information. Matt Berns provides a good overview and resources: link. Lindsay Smith from Otago provides a good academic overview: link. And as always, Wikipedia provides a comprehensive explanation: link.

I think it would be good to kill two birds with one stone, so in this article I will build 2 Logistic Regression models, one with PCA and one without. This way it provides examples for both cases.

One of the weaknesses of PCA is that we won’t know which variables are the top predictors. To know the top predictors we have to build the Logistic Regression model without PCA. As we don’t use PCA there, to reduce the number of variables I will use RFE + manual elimination (see here for an example of reducing variables using RFE + manual elimination on Linear Regression). One of the advantages of PCA is that we don’t need to worry about multicollinearity in the data (highly correlated features). So on the second model, where I don’t use PCA, I have to handle the multicollinearity, i.e. remove the highly correlated features using VIF (Variance Inflation Factor).

There are 5 steps:

  1. Data preparation
    • Load and understand the data
    • Fix data quality issues
    • Data conversion
    • Create derived variables
    • Visualise the data
    • Check highly correlated variables
    • Check outliers
    • Handle class imbalance (see here)
    • Scaling the data
  2. Model 1: Logistic Regression Model with PCA
    • Split the data into X and y
    • Split the data into training and test data set
    • Decide the number of PCA components based on the explained variance
    • Train the PCA model
    • Check the correlations between components
    • Apply PCA model to the test data
    • Train the Logistic Regression model
  3. Model evaluation for Model 1
    • Calculate the Area Under the Curve (AUC)
    • Calculate accuracy, sensitivity & specificity for different cut off points
    • Choose a cut off point
  4. Model 2: Logistic Regression Model without PCA
    • Drop highly correlated columns
    • Split the data into X and y
    • Train the Logistic Regression model
    • Reduce the variables using RFE
    • Remove one variable manually based on the P-value and VIF
    • Rebuild the model
    • Repeat the last 2 steps until P value < 0.05 and VIF < 5
  5. Model evaluation for Model 2
    • Calculate the Area Under the Curve (AUC)
    • Calculate accuracy, sensitivity & specificity for different cut off points and choose a cut off point
    • Identify the most important predictors

Step 1 is long and is not the core of this article, so I will skip it and go directly to Step 2. Step 1 is common to various ML scenarios, so I will write it up in a separate article and put the link here so you can refer to it. One part of Step 1 is about handling class imbalance, which I’ve written about here: link.

Let’s start.

Step 2. Model 1: Logistic Regression Model with PCA

# Split the data into X and y
y = high_value_balanced.pop("target_variable")
X = high_value_balanced

# Split the data into training and test data set
from sklearn.model_selection import train_test_split 
X_train, X_test, y_train, y_test = train_test_split(X,y, train_size=0.7,test_size=0.3,random_state=42)

# Decide the number of PCA components based on the retained information
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
pca = PCA(random_state=88)
pca.fit(X_train)
explained_variance = np.cumsum(pca.explained_variance_ratio_)
plt.vlines(x=80, ymax=1, ymin=0, colors="r", linestyles="--")
plt.hlines(y=0.95, xmax=120, xmin=0, colors="g", linestyles="--")
plt.plot(explained_variance)
plt.show()

We can see above that to retain 95% explained variance (meaning we retain 95% of the information) we need to use 80 PCA components. So we build the PCA model with 80 components.

# Train the PCA model 
from sklearn.decomposition import IncrementalPCA
pca_final = IncrementalPCA(n_components=80)
df_train_pca = pca_final.fit_transform(X_train)

# Note that the above can be automated like this (without using the plot),
# letting PCA choose the number of components that retains 95% of the variance:
pca_final = PCA(0.95)
df_train_pca = pca_final.fit_transform(X_train)

# Check the correlations between components
import seaborn as sns
corr_mat = np.corrcoef(df_train_pca.transpose())
plt.figure(figsize=[15,8])
sns.heatmap(corr_mat)
plt.show()

As we can see in the heatmap above, all of the correlations are near zero (black). This is one of the key features of PCA: the transformed features are not correlated to one another, i.e. their vectors are orthogonal to each other.
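
We can also verify this orthogonality directly on the PCA loading vectors rather than on the transformed data. A minimal sketch, using the pca_final model trained above:

# The loading vectors are the rows of pca_final.components_.
# For an orthonormal basis, the matrix of pairwise dot products
# should be (approximately) the identity matrix.
components = pca_final.components_            # shape: (n_components, n_features)
dot_products = components @ components.T      # pairwise dot products
print(np.allclose(dot_products, np.eye(components.shape[0]), atol=1e-6))  # expect True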

# Apply PCA model to the test data
df_test_pca = pca_final.transform(X_test)

# Train the Logistic Regression model
from sklearn.linear_model import LogisticRegression
LR_PCA_Learner = LogisticRegression()
LR_PCA_Model = LR_PCA_Learner.fit(df_train_pca, y_train)

Step 3. Model evaluation for Model 1

# Calculate the Area Under the Curve (AUC)
from sklearn import metrics
pred_test = LR_PCA_Model.predict_proba(df_test_pca)
"{:2.2}".format(metrics.roc_auc_score(y_test, pred_test[:,1]))

# Calculate the predicted probabilities and convert to dataframes
import pandas as pd
y_pred = LR_PCA_Model.predict_proba(df_test_pca)
y_pred_df = pd.DataFrame(y_pred)
y_pred_1 = y_pred_df.iloc[:,[1]]
y_test_df = pd.DataFrame(y_test)

# Put the index as ID column, remove index from both dataframes and combine them
y_test_df["ID"] = y_test_df.index
y_pred_1.reset_index(drop=True, inplace=True)
y_test_df.reset_index(drop=True, inplace=True)
y_pred_final = pd.concat([y_test_df,y_pred_1],axis=1)
y_pred_final = y_pred_final.rename(columns = { 1 : "Yes_Prob", "target_variable" : "Yes" } )
y_pred_final = y_pred_final.reindex(["ID", "Yes", "Yes_Prob"], axis=1)

# Create columns with different probability cutoffs 
numbers = [float(x)/10 for x in range(10)]
for i in numbers:
    y_pred_final[i]= y_pred_final.Yes_Prob.map(lambda x: 1 if x > i else 0)

# Calculate accuracy, sensitivity & specificity for different cut off points
Probability = pd.DataFrame( columns = ['Probability', 'Accuracy', 'Sensitivity', 'Specificity'])
for i in numbers:
    CM = metrics.confusion_matrix(y_pred_final.Yes, y_pred_final[i] )
    Total = sum(sum(CM))
    Accuracy    = (CM[0,0]+CM[1,1])/Total
    Sensitivity = CM[1,1]/(CM[1,1]+CM[1,0])
    Specificity = CM[0,0]/(CM[0,0]+CM[0,1])
    Probability.loc[i] =[ i, Accuracy, Sensitivity, Specificity]
Probability.plot.line(x='Probability', y=['Accuracy','Sensitivity','Specificity'])

Choose a cut off point

Different applications have different priorities when choosing the cut off point. For some applications the true positives are more important and the true negatives don’t matter much. In this case we should use sensitivity as the evaluation criterion, and choose the cut off point to make the sensitivity as high as possible, e.g. probability = 0.1, which gives sensitivity of almost 100%.

For some applications the true negatives are more important and the true positives don’t matter much. In this case we should use specificity as the evaluation criterion, and choose the cut off point to make the specificity as high as possible, e.g. probability = 0.9, which gives specificity of almost 100%.

For most applications the true positives and the true negatives are equally important. In this case we should use accuracy as the evaluation criterion, and choose the cut off point to make the accuracy as high as possible, e.g. probability = 0.5, which gives accuracy of about 82%.

In most cases it is not one of the 3 extremes above but somewhere in the middle, i.e. the true positives are very important but the true negatives also matter, even though not as much as the true positives. In this case we should choose the cut off point to make the sensitivity high but the specificity not too low. For example, probability = 0.3, which gives sensitivity of about 90%, specificity of about 65% and accuracy of about 80%.
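
If we want to pick this kind of cut off point programmatically rather than by eye, one option is to take, among the cut off points that meet a minimum sensitivity target, the one with the highest specificity. A minimal sketch, assuming the Probability dataframe built above and a hypothetical minimum sensitivity of 0.9:

# Among the cut off points whose sensitivity is at least 0.9 (an assumed target),
# take the one with the highest specificity
candidates = Probability[Probability["Sensitivity"] >= 0.9]
best_cutoff = candidates.sort_values(by="Specificity", ascending=False).iloc[0]
print(best_cutoff)   # shows Probability, Accuracy, Sensitivity, Specificity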

So let’s implement the last paragraph, with cut off point = 0.3:

y_pred_final['predicted'] = y_pred_final.Yes_Prob.map( lambda x: 1 if x > 0.3 else 0)
confusion_matrix = metrics.confusion_matrix( y_pred_final.Yes, y_pred_final.predicted )
Probability[Probability["Probability"]==0.3]

We get sensitivity = 91.8%, specificity = 65.9%, accuracy = 78.9%.

Step 4. Model 2: Logistic Regression Model without PCA

# Drop highly correlated columns
# First, check the correlations with the target variable
data_corr = pd.DataFrame(data.corr()["target_variable"])
data_corr = data_corr[data_corr["target_variable"] != 1]
data_corr["abs_corr"] = data_corr["target_variable"].abs()
data_corr.sort_values(by=["abs_corr"], ascending=False).head(5)

# Split the data into X and y, and normalise the data 
y = data.pop("target_variable")
normalised_data = (data - data.mean())/data.std()
X = normalised_data

# Train the Logistic Regression model
X_train, X_test, y_train, y_test = train_test_split(X,y, train_size=0.7,test_size=0.3,random_state=88)
LR_model = LogisticRegression(max_iter = 200)
LR_model.fit(X_train, y_train)

# Reduce the variables using RFE
from sklearn.feature_selection import RFE
RFE_model = RFE(LR_model, n_features_to_select = 15)
RFE_model = RFE_model.fit(X_train, y_train)
selected_columns = X_train.columns[RFE_model.support_]

# Rebuild the model using statsmodels GLM so we can inspect the P-values
import statsmodels.api as sm
X_train_RFE = X_train[selected_columns]
LR_model.fit(X_train_RFE, y_train)
LR2 = sm.GLM(y_train,(sm.add_constant(X_train_RFE)), family = sm.families.Binomial())
LR_model2 = LR2.fit()
LR_model2.summary()

# Check the VIF
from statsmodels.stats.outliers_influence import variance_inflation_factor
Model_VIF = pd.DataFrame()
Model_VIF["Variable"] = X_train_RFE.columns
number_of_variables = X_train_RFE.shape[1]
Model_VIF["VIF"] = [variance_inflation_factor(X_train_RFE.values, i) for i in range(number_of_variables)]
Model_VIF.sort_values(by="VIF", ascending=False)

# Remove one variable manually based on the P-value and VIF
X_train_RFE.drop(columns=["column8"], axis=1, inplace=True)
LR2 = sm.GLM(y_train,(sm.add_constant(X_train_RFE)), family = sm.families.Binomial())
LR_Model2 = LR2.fit()
LR_Model2.summary()

Repeat the last 2 steps until all P values are < 0.05 and all VIFs are < 5.
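
The manual repetition can also be wrapped in a loop. A rough sketch, assuming X_train_RFE and y_train from above and keeping the same thresholds (0.05 and 5):

# Repeatedly drop the worst variable (highest VIF above 5, otherwise highest
# P-value above 0.05) and refit, until both conditions are satisfied
cols = list(X_train_RFE.columns)
while True:
    model = sm.GLM(y_train, sm.add_constant(X_train_RFE[cols]),
                   family=sm.families.Binomial()).fit()
    pvalues = model.pvalues.drop("const")
    vifs = pd.Series([variance_inflation_factor(X_train_RFE[cols].values, i)
                      for i in range(len(cols))], index=cols)
    if vifs.max() > 5:
        cols.remove(vifs.idxmax())        # drop the variable with the highest VIF
    elif pvalues.max() > 0.05:
        cols.remove(pvalues.idxmax())     # drop the variable with the highest P-value
    else:
        break

# Keep the final column list and model so the evaluation below can use them
X_train_RFE = X_train_RFE[cols]
LR_Model2 = model
LR_Model2.summary()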

Step 5. Model evaluation for Model 2

# Calculate the Area Under the Curve (AUC)
# Keep only the columns used in the final model and add the constant,
# then predict the probabilities with the GLM result
X_test_final = sm.add_constant(X_test[X_train_RFE.columns])
pred_test = LR_Model2.predict(X_test_final)
"{:2.2}".format(metrics.roc_auc_score(y_test, pred_test))

Calculate accuracy, sensitivity & specificity for different cut off points and choose a cut off point:

See “choose cut off point” section above

Identify the most important predictors:

From the model output above, i.e. “LR_Model2.summary()” we can see the most important predictors.
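
Because the features were normalised before training, the coefficients are on a comparable scale, so we can also rank the predictors programmatically. A minimal sketch:

# Rank the predictors by the absolute value of their coefficients
# (the features were normalised, so the coefficients are comparable)
top_predictors = LR_Model2.params.drop("const").abs().sort_values(ascending=False)
print(top_predictors.head(5))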

15 April 2021

Linear Regression in Python

Filed under: Data Science,Machine Learning — Vincent Rainardi @ 6:53 am

Linear Regression is about predicting a numerical variable. There are 5 steps when we do it in Python:

  1. Prepare the data
    • Load and understand the data
    • Fix data quality issues
    • Remove non required columns
    • Visualise and analyse the data
    • Identify highly correlated columns and remove them
    • Create derived variables
    • Create dummy variables for categorical variables
  2. Build the model
    • Split the data into training data and test data
    • Scale the numerical variables in the training data
    • Split the data into y and X
    • Automatically choose top 15 features using RFE (Recursive Feature Elimination)
    • Manually drop features based on P-value and VIF (Variance Inflation Factor)
    • Rebuild the model using OLS (Ordinary Least Squares)
    • Repeat the last 2 steps until all variables have P-value < 0.05 and VIF < 5
  3. Check the distribution of the error terms
  4. Make predictions
    • Scale the numerical variables in the test data
    • Remove the dropped features in the test data
    • Make predictions based on the test data
  5. Model evaluation
    • Plot the predicted vs actual values
    • Calculate R2, Adjusted R2 and F statistics
    • Create the linear equation for the best fitted line
    • Identify top predictors

Below is the Python code for the above steps. I will skip step 1 (preparing the data) and go directly to step 2, because step 1 is common to all ML models (not just linear regression) and I will write it up in a separate article.

2. Build the model

# Split the data into training data and test data 
import numpy as np
from sklearn.model_selection import train_test_split
np.random.seed(0)
df_train, df_test = train_test_split(df_data, train_size = 0.7, test_size = 0.3, random_state = 100)

# Scale the numerical variables in the training data
from sklearn.preprocessing import MinMaxScaler
minmax_scaler = MinMaxScaler()
continuous_columns = ["column1", "column2", "column3", "column4"]
df_train[continuous_columns] = minmax_scaler.fit_transform(df_train[continuous_columns])

# Split the data into y and X
y_train = df_train.pop("count")
x_train = df_train

# Automatically choose top 15 features using RFE
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
data_LR = LinearRegression()
data_LR.fit(x_train, y_train)
data_RFE = RFE(data_LR, n_features_to_select = 15)             
data_RFE = data_RFE.fit(x_train, y_train)

# Check which columns are selected by RFE and which are not
list(zip(x_train.columns,data_RFE.support_,data_RFE.ranking_))
selected_columns = x_train.columns[data_RFE.support_]
unselected_columns = x_train.columns[~data_RFE.support_]

# Train the model based on the columns selected by RFE
# and check the coefficients, R2, F statistics and P values
x_train = x_train[selected_columns] 
import statsmodels.api as data_stat_model  
x_train = data_stat_model.add_constant(x_train) 
data_OLS_result = data_stat_model.OLS(y_train, x_train).fit() 
data_OLS_result.params.sort_values(ascending=False) 
print(data_OLS_result.summary()) 

# Calculate the VIF (Variance Inflation Factor) 
from statsmodels.stats.outliers_influence import variance_inflation_factor
import pandas as pd
data_VIF = pd.DataFrame()
data_VIF['variable'] = x_train.columns
number_of_variables = x_train.shape[1]
data_VIF['VIF'] = [variance_inflation_factor(x_train.values, i) for i in range(number_of_variables)]
data_VIF.sort_values(by="VIF", ascending=False) 

# Drop one column and rebuild the model
# And check the coefficients, R-squared, F statistics and P values
x_train.drop(columns=["column5"], axis=1, inplace=True)
x_train = data_stat_model.add_constant(x_train)
data_OLS_result = data_stat_model.OLS(y_train, x_train).fit()
print(data_OLS_result.summary())

# Check the VIF again
data_VIF = pd.DataFrame()
data_VIF['variable'] = x_train.columns
number_of_variables = x_train.shape[1]
data_VIF['VIF'] = [variance_inflation_factor(x_train.values, i) for i in range(number_of_variables)]
data_VIF.sort_values(by="VIF", ascending=False) 

Keep dropping one column at a time and rebuild the model until all variables have P value < 0.05 and VIF < 5.

The result from print(data_OLS_result.summary()) is something like this, where we can see the R2 and Adjusted R2 of the training data:

                            OLS Regression Results                            
==============================================================================
Dep. Variable:                  count   R-squared:                       0.841
Model:                            OLS   Adj. R-squared:                  0.838
Method:                 Least Squares   F-statistic:                     219.8
Date:                Tue, 29 Dec 2020   Prob (F-statistic):          6.03e-190
Time:                        09:07:14   Log-Likelihood:                 508.17
No. Observations:                 510   AIC:                            -990.3
Df Residuals:                     497   BIC:                            -935.3
Df Model:                          12                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.2444      0.028      8.658      0.000       0.189       0.300
column1        0.2289      0.008     28.108      0.000       0.213       0.245
column2        0.1258      0.011     10.986      0.000       0.103       0.148
column3        0.5717      0.025     22.422      0.000       0.522       0.622
column4       -0.1764      0.038     -4.672      0.000      -0.251      -0.102
column5       -0.1945      0.026     -7.541      0.000      -0.245      -0.144
column6       -0.2362      0.026     -8.946      0.000      -0.288      -0.184

3. Check the distribution of the error terms

In linear regression we assume that the error term follows normal distribution. So we have to check this assumption before we can use the model for making predictions. We check this by looking at the histogram of the error term visually, making sure that the error terms are normally distributed around zero and that the left and right side are broadly similar.

import matplotlib.pyplot as plt
import seaborn as sns
fig = plt.figure()
y_predicted = data_OLS_result.predict(x_train)
sns.distplot((y_train - y_predicted), bins = 20)
fig.suptitle('Error Terms', fontsize = 16)
plt.show()

4. Making predictions

# Scale the numerical variables in the test data (just transform, no need to fit)
df_test[continuous_columns] = minmax_scaler.transform(df_test[continuous_columns])

# Split the test data into X and y
y_test = df_test.pop('count')
x_test = df_test

# Remove the features dropped by RFE and manual process
x_test = x_test[selected_columns]
x_test = x_test.drop(["column5", "column6", "column7"], axis = 1)

# Add the constant variable to test data (because by default stats model line goes through the origin)
x_test = data_stat_model.add_constant(x_test)

# Make predictions based on the test data
y_predicted = data_OLS_result.predict(x_test)

5. Model Evaluation

Now that we have built the model and used it to make predictions, we need to evaluate the performance of the model, i.e. how close the predictions are to the actual values.

# Compare the actual and predicted values
fig = plt.figure()
plt.scatter(y_test, y_predicted)
fig.suptitle('Compare actual (Y Test) vs Y predicted', fontsize = 16)
plt.xlabel('Y Test', fontsize = 14)
plt.ylabel('Y Predicted', fontsize = 14)      
plt.show()
  • We can see here that Y Predicted and Y Test have a linear relationship, which is what we expect.
  • There are a few data points which deviate from the line, for example the one in the lower left corner.

We can now calculate the R2 score on the test data like this:

from sklearn.metrics import r2_score
r2_score(y_test, y_predicted)

We can also calculate the Adjusted R2 from the R2 score using the standard formula; a sketch (with n the number of test samples and p the number of predictors):
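
# Adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1)
n = x_test.shape[0]         # number of test samples
p = x_test.shape[1] - 1     # number of predictors (excluding the constant column)
r2 = r2_score(y_test, y_predicted)
adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(adjusted_r2)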

Based on the coefficient values from the OLS regression result we construct the linear equation for the best fitted line, starting from the top predictors like this:

                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.2444      0.028      8.658      0.000       0.189       0.300
column1        0.2289      0.008     28.108      0.000       0.213       0.245
column2        0.1258      0.011     10.986      0.000       0.103       0.148
column3        0.5717      0.025     22.422      0.000       0.522       0.622
column4       -0.1764      0.038     -4.672      0.000      -0.251      -0.102
column5       -0.1945      0.026     -7.541      0.000      -0.245      -0.144
column6       -0.2362      0.026     -8.946      0.000      -0.288      -0.184

y = 0.2444 + 0.5717 column3 - 0.2362 column6 + 0.2289 column1 - 0.1945 column5 - 0.1764 column4 + …

Based on the absolute values of the coefficients we can see that the top 3 predictors are column3, column6 and column1. It is very useful in any machine learning project to know the top predictors, i.e. the most influential features, because then we can take business action to ensure that those features are maximised (or minimised if the coefficient is negative).

Now that we have the linear regression equation, we can use it to predict the target variable for any given input values.
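
As an illustration, here is a minimal sketch that plugs a hypothetical (already scaled) input into the equation above; the column values are made up purely for the example, and only the coefficients shown in the summary excerpt are used:

# Hypothetical scaled input values, purely for illustration
new_input = {"column1": 0.5, "column2": 0.2, "column3": 0.8,
             "column4": 0.1, "column5": 0.0, "column6": 1.0}

# Apply the intercept and coefficients from the OLS summary above
y_new = (0.2444
         + 0.2289 * new_input["column1"]
         + 0.1258 * new_input["column2"]
         + 0.5717 * new_input["column3"]
         - 0.1764 * new_input["column4"]
         - 0.1945 * new_input["column5"]
         - 0.2362 * new_input["column6"])
print(y_new)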

7 April 2021

Handling Class Imbalance

Filed under: Data Science,Machine Learning — Vincent Rainardi @ 7:17 am

In this article I will explain a few ways to treat class imbalance in machine learning. I will also give some examples in Python.

What is class imbalance?

Imagine you have a data set containing 2 classes: 100 members of class A and 100 members of class B. This is called a balanced data set. But if those 2 classes have 5000 and 100 members respectively, that is an imbalanced data set. This is not limited to 2 classes; it can happen with more than 2 classes, for example when classes A and B both have 5000 members whereas classes C and D both have 100 members.

In an imbalanced data set, the class with fewer members is called the minority class, and the class with many more members is called the majority class. So if class A has 5000 members and class B has 100 members, class A is the majority class and class B is the minority class.

Note that the “class” here is the target variable, not the independent variable. So the target variable is a categorical variable, not a continuous variable. A case where the target variable has 2 classes like above is called “binary classification”, and it is quite common in machine learning.

At what ratio it is called class imbalance?

There is no exact definition of the ratio. If one class is 20% of the size of the other I would call it imbalanced, whereas at 70% I would call it balanced; around 50% is a reasonable dividing line. It is wrong to dwell on finding the precise ratio range because each data set and each ML algorithm is different. Some cases give bad results at 40%, some cases are fine at 40%.

Why class imbalance occurs

Some data is naturally imbalanced, because one class happens rarely in nature whereas the other happens frequently. For example: cancer, fraud, spam, accidents. The number of people with cancer is naturally much smaller than the number without. The number of fraudulent credit card payments is naturally much smaller than the number of good payments. The number of spam emails is much smaller than the number of good emails. The number of flights having accidents is naturally much smaller than the number of good flights.

Why class imbalance needs to be treated

Some machine learning algorithms don’t work well if the target variable is imbalanced, because during training the majority class is favoured. As a result the model becomes skewed towards the majority class. This is an issue because in most cases what we are interested in is predicting the minority class. For example, predicting that a transaction is fraudulent, or that an email is spam, is more important than predicting the majority class.

That is the reason why class imbalance needs to be treated: the model would be skewed towards the majority class, while what we need to predict is the minority class.

How to treat class imbalance

We resolve this situation by oversampling the minority class or by undersampling the majority class.

Oversampling the minority class means we randomly choose sample data from the minority class many times, whereas on the majority class we don’t do anything.

For example, if class A has 5000 members and class B has 100 members, we resample class B until it also has 5000 members, i.e. we randomly pick data (with replacement) from class B 5000 times. Effectively it is like duplicating the class B data 50 times.

Undersampling the majority class means we randomly select from the majority class only as many samples as the minority class has. In the above example we randomly pick 100 samples from class A, so that both class A and class B have 100 members.

Apart from randomly selecting data there are many other techniques, including:

  • Creating new samples (called synthetic data)
  • Selecting samples not randomly but favouring samples which are misclassified
  • Selecting samples not randomly but favouring samples which resemble the other class

Jason Brownlee explained several other techniques such as SMOTE, Borderline Oversampling, CNN, ENN, OSS in this article: link.

Python examples

1. Random Oversampling

# Import resample from the Scikit Learn library
from sklearn.utils import resample

# Put the majority class and minority class on separate dataframes
majority_df = df[df["fraud"]==0]
minority_df = df[df["fraud"]==1] 

# Oversampling the minority class randomly
new_minority_df = resample( minority_df, replace = True, 
                            n_samples = len(majority_df), 
                            random_state = 0 )

# Combine the new minority class with the majority class
balanced_df = pd.concat([majority_df, new_minority_df])
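
The random undersampling described earlier follows the same pattern; a minimal sketch, reusing the same majority_df and minority_df dataframes:

# Undersampling the majority class randomly (without replacement),
# down to the size of the minority class
new_majority_df = resample( majority_df, replace = False,
                            n_samples = len(minority_df),
                            random_state = 0 )

# Combine the undersampled majority class with the minority class
undersampled_df = pd.concat([new_majority_df, minority_df])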

2. Synthetic Minority Oversampling Technique (SMOTE)

# Import SMOTE from the Imbalance Learn library
from imblearn.over_sampling import SMOTE

# Oversampling the minority class using SMOTE
s = SMOTE()
X_new, y_new = s.fit_resample(X, y)

Jason Brownlee illustrates very well which part of the minority class got oversampled by SMOTE in this article: link. Please notice how the minority class differs on the first 3 plots in his article. We can see clearly how SMOTE with random undersampling is better than SMOTE alone or random undersampling alone.

6 April 2021

Natural Language Processing (NLP)

Filed under: Data Science,Machine Learning — Vincent Rainardi @ 8:15 am

NLP is different from all other machine learning areas. Machine learning usually deals with mathematics, with numbers: it is about finding a pattern in the numbers and making a prediction. The roots of the analysis are mathematical, such as matrices, vectors, statistics, probability and calculus. But NLP is about words and sentences, which is very different.

We are now used to Alexa, Siri and Google being able to understand us and answer us back in a conversation (5 years ago it wasn’t like that). When we type a reply to an email in Gmail or a message in LinkedIn, we are now used to receiving suggestions about what we are going to type. And when we log in to British Gas, online banking or an online retail shop, we now find chat bots with whom we are able to have a useful conversation. Much better than 5 years ago. There is no doubt there have been significant advances in this area.

The processing of language, be it voice or text, is done at 3 levels. The bottom level is lexical analysis, where ML deals with each word in isolation. The middle level is syntax analysis, where ML analyses the words within the context of the sentence and the grammar. The top level is semantic analysis, where ML tries to understand the meaning of the sentence.

To do lexical analysis we start with regular expressions. We use regular expressions to find words within a text, and to replace them with other words. Then we learn how to identify and remove stop words such as “and”, “the”, “a”, which occur frequently but don’t provide useful information during lexical analysis. The third step is learning how to break the text into sentences and into words. And finally, for each word we try to find the base word using stemming, lemmatisation or Soundex.

Stemming is a process of removing prefixes and suffixes like “ing” and “er” from “learning” and “learner” to get the base word, which is “learn”. Lemmatisation is a process of changing a word to its root, e.g. from “went” to “go”, and from “better”, “well”, “best” to “good”. Soundex is a 4-character code that represents the pronunciation of a word, rather than its spelling.
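
As a small illustration of these lexical analysis steps, here is a sketch using the NLTK library (tokenisation, stop word removal, stemming and lemmatisation); the sample sentence is made up:

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-off downloads of the NLTK resources used below
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")
nltk.download("omw-1.4")

text = "The learners went to a better class for learning NLP"

# Break the text into words (tokenisation)
words = nltk.word_tokenize(text)

# Remove stop words such as "the", "to", "a", "for"
stop_words = set(stopwords.words("english"))
words = [w for w in words if w.lower() not in stop_words]

# Stemming: strip prefixes/suffixes, e.g. "learning" -> "learn"
stemmer = PorterStemmer()
print([stemmer.stem(w) for w in words])

# Lemmatisation: change a word to its root, e.g. "went" -> "go" (as a verb)
lemmatizer = WordNetLemmatizer()
print([lemmatizer.lemmatize(w, pos="v") for w in words])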

The syntax analysis is done by tagging each word as a noun, verb, adjective, etc. (called “part of speech” tagging). The tagging is done by parsing (breaking up) the sentences into groups of words (phrases), analysing the grammatical patterns, and considering the dependencies between words.
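
A minimal sketch of part of speech tagging, again using NLTK (the resource downloads are one-offs):

import nltk
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Orange is a popular colour"
tokens = nltk.word_tokenize(sentence)

# Tag each word with its part of speech, e.g. NN = noun, VBZ = verb, JJ = adjective
print(nltk.pos_tag(tokens))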

Semantic analysis is about understanding the meaning of the words and sentences by looking at the structure of the sentence and the word tagging. Words such as “Orange” can mean a colour, a fruit or an area, and “Apple” can mean a fruit or a company, depending on the sentence. In semantic analysis we either assign predefined categories to a text (for example for sentiment analysis, for classifying messages, or for chat bots) or pull out specific information from a text (for example extracting certain terms from IRS contracts or other documents).
