Data Sprint #7: Getting Started Code

:hammer_and_wrench: Contribute: Found a typo, or any other change that could improve this notebook tutorial? Please consider sending us a pull request in the notebook's public repo here

Getting Started Code For Data Sprint #7 on DPhi

Author: Manish KC

Loading Libraries

Not all Python capabilities are loaded into our working environment by default (even if they are already installed on your system). So, we import each library that we want to use.

In data science, numpy and pandas are the most commonly used libraries. Numpy is used for calculations like means, medians, square roots, etc. Pandas is used for data processing and data frames. We choose alias names for our libraries for convenience (numpy --> np and pandas --> pd).

Note: You can import all the libraries that you think will be required up front, or import them as you go along.

Here we are importing two libraries - numpy and pandas

import numpy as np        # Fundamental package for linear algebra and multidimensional arrays
import pandas as pd       # Data analysis and manipulation tool

Loading Dataset

The pandas module is used for reading files.

You can learn more about pandas here

# In the read_csv() function, we pass the location of the file, which is hosted on the official DPhi GitHub page.
bank_marketing_data = pd.read_csv("https://raw.githubusercontent.com/dphi-official/Datasets/master/bank_marketing_data/training_set_label.csv")

What do you need to do now?

  • Perform EDA and Data Visualization :eyes: to understand the data. Learn more about EDA here. Learn more about data visualization here
  • Clean the data if required (like removing or filling missing values, treating outliers, etc.). Learn more about handling missing values here
  • Perform data preprocessing if you feel it's required. Learn about one hot encoding here. A minimal sketch of these steps is given below.
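
As a rough illustration, here is a minimal sketch of these steps, assuming simple mode/median filling and one hot encoding via pd.get_dummies (whether and how to apply them should follow from your own EDA):

# A quick look at the data: dimensions, column types and summary statistics
print(bank_marketing_data.shape)
bank_marketing_data.info()
print(bank_marketing_data.describe())

# Count missing values per column
print(bank_marketing_data.isnull().sum())

# One possible way to fill missing values: mode for categorical columns,
# median for numeric columns, computed on the training data
fill_values = {col: bank_marketing_data[col].mode()[0]
                    if bank_marketing_data[col].dtype == 'object'
                    else bank_marketing_data[col].median()
               for col in bank_marketing_data.columns if col != 'y'}
bank_marketing_data = bank_marketing_data.fillna(value=fill_values)

# One hot encode the categorical input features (everything except the target 'y')
cat_cols = [c for c in bank_marketing_data.select_dtypes('object').columns if c != 'y']
bank_marketing_data = pd.get_dummies(bank_marketing_data, columns=cat_cols)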

Separating Input Features and Output Features

Before building any machine learning model, we always separate the input variables and output variables. Input variables are the quantities whose values are varied in an experiment, whereas the output variable is the one whose values depend on the input variables. Input variables are therefore also known as independent variables, as their values do not depend on any other quantity, and output variables are also known as dependent variables, as their values depend on other variables, i.e. the input variables. Here we want to predict whether the client will subscribe to the product or not, so the dataset column y is our target variable and the remaining features are input variables.

By convention input variables are represented with ‘X’ and output variables are represented with ‘y’.

# Input/independent variables
X = bank_marketing_data.drop('y', axis = 1)   # here we are dropping the 'y' feature as this is the target; 'X' holds the input features.
                                              # The change is not made in place as we have not used 'inplace = True'

y = bank_marketing_data['y']                  # Output/dependent variable

Splitting the data into Train and Validation Set

We want to check the performance of the model that we build. For this purpose, we always split the given data (both input and output) into a training set, which will be used to train the model, and a validation set, which will be used to check how accurately the model predicts outcomes.

For this purpose we have a function called 'train_test_split' in the 'sklearn.model_selection' module.

# import train_test_split
from sklearn.model_selection import train_test_split
# split the data
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# X_train: independent/input feature data for training the model
# y_train: dependent/output feature data for training the model
# X_val: independent/input feature data for validating the model; will be used to predict the output values
# y_val: original dependent/output values for X_val; we will compare these values with our predicted values to check the performance of our built model.

# test_size = 0.3: 30% of the data will go to the validation set and 70% of the data will go to the training set
# random_state = 42: this fixes the split, i.e. you get the same split each time you run the code

Building Model

Now we are finally ready, and we can train the model.

There are tons of machine learning models like Linear Regression, Random Forest, Decision Tree, etc., to name a few. Here, however, we are using a Random Forest Classifier (again, using the sklearn library).

Then we feed the model both the data (X_train) and the answers for that data (y_train).

# Importing RandomForestClassifier from sklearn.ensemble
# We will discuss later why Random Forest lives in the ensemble module of the sklearn library
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()

Train the model

rfc.fit(X_train, y_train)

Validate The Model

Wondering 🤔 how well your model learned? Let's check.

Predict on the validation data (X_val)

Now we predict with our trained model on the validation set we created (X_val) and evaluate the model on unseen data.

pred = rfc.predict(X_val)

Model Evaluation

Evaluating the performance of the machine learning model we have built is an essential part of any machine learning project. We measure the performance of our model using evaluation metrics.

There are many evaluation metrics to use for a classification problem, to name some: Accuracy Score, F1 Score, Precision, Recall, etc. However, F1 Score is the metric for this challenge.

# import f1_score from sklearn.metrics
from sklearn.metrics import f1_score
print('F1 Score is: ', f1_score(y_val, pred, pos_label='yes'))   # pos_label picks the positive class; adjust it if your target labels differ

# y_val is the original target value of the validation set (X_val)
# pred is the predicted target value of the validation set
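
If you also want to look at the other metrics mentioned above, sklearn.metrics provides them as well. A minimal sketch using accuracy_score and classification_report; the report prints precision, recall and F1 for every class, which sidesteps choosing a pos_label:

from sklearn.metrics import accuracy_score, classification_report

print('Accuracy is: ', accuracy_score(y_val, pred))
print(classification_report(y_val, pred))    # precision, recall and F1 per class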

Predict The Output For Testing Dataset :sweat_smile:

We have trained our model and evaluated it; now, finally, we will predict the output/target for the testing data (i.e. testing_set_label.csv) given in the 'Data' section of the problem page.

Load Test Set

Load the test data on which final submission is to be made.

test_data = pd.read_csv('https://raw.githubusercontent.com/dphi-official/Datasets/master/bank_marketing_data/testing_set_label.csv')

Note:

  • Use the same techniques to deal with missing values as you did with the training dataset (a sketch follows after this list).

  • Don't remove any observations/records from the test dataset, otherwise you will get a wrong answer. The number of items in your prediction must be the same as the number of records in the test dataset.

  • Use the same techniques to preprocess the data as you did with the training dataset.
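
For instance, continuing the earlier sketch (its fill_values dictionary and pd.get_dummies encoding are assumptions, not part of the original notebook), the same treatment applied to the test set might look like this; the reindex call aligns the test columns with the training features:

# Fill missing values in the test set using statistics computed on the TRAINING data
test_data = test_data.fillna(value=fill_values)

# One hot encode the test set with the same scheme as the training set
test_data = pd.get_dummies(test_data, columns=test_data.select_dtypes('object').columns)

# Align the test columns with the training features:
# columns missing from the test set are added and filled with 0,
# and dummy columns unseen during training are dropped
test_data = test_data.reindex(columns=X.columns, fill_value=0)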

Why do we need to apply the same procedure of filling missing values, data cleaning and data preprocessing to the new test data as was done for the training and validation data?

Ans: Because our model has been trained on a certain format of data, and if we don't provide the testing data in the same format, the model will give erroneous predictions and its evaluation score (here, the F1 score) will suffer. Also, if the model was built on 'n' features, you should always give the model the same number of features when predicting on new test data. If you provide a different number of features while predicting the output, your ML model will throw a ValueError saying something like 'number of features given x; expecting n'. Not confident about these statements? Well, as a data scientist you should always perform experiments and observe the results.
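
One such experiment, sketched below: deliberately drop a column from the validation features and watch the model complain (the exact wording of the error depends on your sklearn version):

# A tiny experiment: predict with one feature removed and observe the ValueError
try:
    rfc.predict(X_val.drop(columns=[X_val.columns[0]]))
except ValueError as e:
    print(e)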

Make Prediction on Test Dataset

Time to make a submission!!!

target = rfc.predict(test_data)

Note: Follow the submission guidelines given in ‘How To Submit’ Section.

How to save prediction results locally via Jupyter notebook?

If you are working in a Jupyter notebook, execute the below block of code. A file named 'submission.csv' will be created in your current working directory.

res = pd.DataFrame(target)        # target is the final predictions of your model on the input features of the new unseen test data
res.index = test_data.index       # important for comparison; "test_data" is your new test dataset
res.columns = ["prediction"]
res.to_csv("submission.csv")      # the csv file will be saved locally in the same location as this notebook

OR,

if you are working on Google Colab, then use the below set of code to save the prediction results locally.

How to save prediction results locally via colab notebook?

If you are working in a Google Colab notebook, execute the below block of code. A file named 'submission.csv' will be downloaded to your system.

# Create a DataFrame of the predicted values with the corresponding index
res = pd.DataFrame(target)        # target is the final predictions of your model on the input features of the new unseen test data
res.index = test_data.index       # important for comparison; "test_data" is your new test dataset
res.columns = ["prediction"]

# To download the csv file locally
from google.colab import files
res.to_csv('submission.csv')         
files.download('submission.csv')


Well Done! :+1:

You are all set to make a submission. Let’s head to the solve page to make the submission.