Handling categorical features with Python
As a data scientist, you will frequently encounter categorical variables in your dataset, like location, car model, or gender. You cannot feed them directly into a machine learning algorithm, as these algorithms only understand numbers. There are various techniques to convert categorical features into numerical features, but those techniques themselves are not the focus of this post; this post is about how to implement them in Python. I will talk a little about the techniques but won't go into much depth; the emphasis is on the various ways you can implement them in Python.
#What are Categorical variables?
Categorical variables are qualitative variables that take non-numeric values: gender, for example, can be male or female. Even when the values are numeric, the feature description may say the variable is categorical, which means those numbers are not mathematically related. Often the value is an answer to a question, like yes/no, ugly/nice/ok/pretty, or good/bad; in other words, the answer comes from a set of predefined possibilities, and a qualitative/categorical variable takes its values from that set. These variables often prove to be of great importance and can boost the accuracy of a model to a considerable extent. There are two types of categorical variable:
#Ordinal/Ordered categorical variable
The order of these variables matters. For example, a movie review can take the values good, average, and bad, and average sits between good and bad; the finishing order of a race is another example. In such cases the order of the values carries information, and ignoring it can produce misleading results.
#Nominal categorical variable
For these variables order doesn't matter: the type of seat (economy/business), gender (male/female), etc. The order of the values makes no difference to their interpretation.
Whether to interpret a categorical variable as nominal or ordinal depends on the data you have, and misunderstanding the variable can lead to false results. So it is really important to think carefully before diving into the implementation.
#Simple approach to encode categorical features
The two approaches we are going to use to convert a categorical variable to its numerical equivalent are as follows:
#Label Encoding
A simple approach to converting a categorical variable to a numerical one is to assign a unique number to each possible value of the variable and replace the values with their corresponding numbers. This technique should only be used for ordinal categorical variables, and only once you know the order of the values: because the order matters, the numbers assigned to the values should also be sorted, in ascending or descending order (it doesn't matter which you choose). For example, a movie review variable may have five possible in-order values (excellent, awesome, good, bad, burnt it), so the assigned values would run from 5 to 1, with 5 being excellent and 1 being burnt it. If one data point has the review value 4 (awesome) and another has 2 (bad), their average is 3 (good), which makes sense. This would not hold for a nominal variable, which is why we cannot use this method there.
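As a minimal sketch of the idea, before we get to the library implementations: an ordinal mapping can be expressed as a plain Python dictionary (the scale here is hand-built for illustration).

```python
# A hand-built ordinal mapping for the movie-review example above.
# The numbers are chosen only to preserve the order of the values.
review_scale = {"burnt it": 1, "bad": 2, "good": 3, "awesome": 4, "excellent": 5}

reviews = ["awesome", "bad"]
encoded = [review_scale[r] for r in reviews]

print(encoded)                      # [4, 2]
print(sum(encoded) / len(encoded))  # 3.0, i.e. "good"
```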
#One Hot Encoding/One of K scheme
The other approach is called one hot encoding, where a categorical variable is converted into binary vectors: each possible value of the variable becomes a column of its own, with the value 1 in the rows where the original variable took that value and 0 everywhere else. This concept is explained with the example shown below.
Table before applying one hot encoding transformation
name | gender
---|---
Roshan | male
Anna | female
Hussain | male
Ashwini | female
Table after applying one hot encoding transformation
name | male | female
---|---|---
Roshan | 1 | 0
Anna | 0 | 1
Hussain | 1 | 0
Ashwini | 0 | 1
#Implementing Label Encoding
We saw above what label encoding is. You won't always get categorical variables in string form; you may encounter seemingly random numerical values (this is typically the case in competitions) that are nonetheless a categorical feature. To deal with such situations there is a utility class, LabelEncoder, in the preprocessing module of the sklearn package; it can handle categorical variables in both numerical and string form. Fire up an IPython console and try the code below.
```python
from sklearn.preprocessing import LabelEncoder
```
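The import alone doesn't show much, so here is a short sketch of typical LabelEncoder usage; the review values are made up for illustration.

```python
from sklearn.preprocessing import LabelEncoder

reviews = ["excellent", "bad", "good", "awesome", "bad"]

le = LabelEncoder()
encoded = le.fit_transform(reviews)

print(le.classes_)  # the categories, in the sorted order LabelEncoder assigns them
print(encoded)      # each review replaced by the index of its category
```

One caveat: LabelEncoder numbers the classes in sorted (alphabetical) order, not in their ordinal order, so for a truly ordinal variable you may still want a hand-built mapping like the dictionary sketch earlier.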
#Implementing One Hot Encoding/One of K-Scheme
You may encounter data in various forms and data types (number/string); to deal with these situations there are utility classes in the sklearn package to convert the data into a one hot encoding scheme. As we discussed earlier, a categorical variable can have a numerical or string data type. The following are two methods to convert a categorical variable to a one hot encoding scheme:
#SKLearn way
The OneHotEncoder utility class provided by the sklearn package converts numerical values to one hot encoding, but we can also deal with string values if we use LabelEncoder along with OneHotEncoder: LabelEncoder maps the string values of the categorical variable to numbers, and those numbers can then be converted to one hot encoding by OneHotEncoder. The implementation is shown in the code below.
```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
```
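Here is a sketch of the two-step approach just described, using a gender column like the one in the tables above (the data is made up to match them):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

gender = np.array(["male", "female", "male", "female"])

# Step 1: LabelEncoder maps the strings to integers (female -> 0, male -> 1).
le = LabelEncoder()
gender_int = le.fit_transform(gender)

# Step 2: OneHotEncoder expands the integer column into binary columns.
# OneHotEncoder expects a 2-D array, hence the reshape; .toarray()
# densifies the sparse matrix it returns.
ohe = OneHotEncoder()
gender_onehot = ohe.fit_transform(gender_int.reshape(-1, 1)).toarray()

print(gender_onehot)
# [[0. 1.]
#  [1. 0.]
#  [0. 1.]
#  [1. 0.]]
```

Newer versions of scikit-learn let OneHotEncoder consume string columns directly, so the LabelEncoder step is only strictly needed on older releases.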
#Pandas way
If your data is already loaded in pandas, then pandas.get_dummies is a very handy method to convert your categorical variables to a one hot encoding scheme; it is much more convenient than the previous sklearn approach. It can convert multiple columns in one call: you pass the data frame and the columns you want to transform. Below is an example of what I have just explained.
```python
import pandas as pd
```
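A sketch using the same toy table from the one hot encoding section; the frame is made up to match it:

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["Roshan", "Anna", "Hussain", "Ashwini"],
    "gender": ["male", "female", "male", "female"],
})

# get_dummies expands every column listed in `columns` into one
# binary column per category and leaves the other columns untouched.
encoded = pd.get_dummies(df, columns=["gender"])
print(encoded)
```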
We discussed two libraries, sklearn and pandas, that help us deal with categorical variables; which one should you prefer? If you go the sklearn way, you get the advantage of chaining transformers and estimators in a Pipeline or FeatureUnion to create data pipelines, which can make your whole analysis more manageable. If you go the pandas way, you get simplicity, but you will have to wrap the transformation in a custom transformer before you can chain it in a Pipeline or FeatureUnion. A rough sketch of the sklearn-style pipeline is shown below.
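For illustration, here is what such a chained sklearn pipeline could look like; the column name, X_train, and y_train are hypothetical placeholders, not from the original examples:

```python
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Encode the categorical column(s), pass everything else through,
# then feed the result straight into a model.
pipe = Pipeline([
    ("encode", ColumnTransformer(
        [("onehot", OneHotEncoder(), ["gender"])],
        remainder="passthrough",
    )),
    ("model", LogisticRegression()),
])

# With a DataFrame X_train containing a "gender" column and a target
# y_train, a single call would encode and train in one step:
# pipe.fit(X_train, y_train)
```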
#Conclusion
We saw the different types of categorical variables and how to encode them so that we can use them in machine learning algorithms along with other features. You could write your own code to convert categorical variables into numerical ones, but you can also leverage the existing utility classes and methods provided by popular ML libraries, which come in handy and can save some time when cleaning dirty data.