Presentation on ‘Machine Learning in plain English – Part 1’


This is the first part of my series ‘Machine Learning in plain English – Part 1’, in which I discuss the intuition behind different Machine Learning algorithms, metrics and approaches. These presentations will not include tiresome math or laborious programming constructs, and will instead focus on just the concepts behind the Machine Learning algorithms.  This presentation discusses what Machine Learning is, Gradient Descent, linear, multivariate and polynomial regression, bias/variance, underfitting, good fit and overfitting, and finally logistic regression.

It is hoped that these presentations will trigger sufficient interest in you to explore this fascinating field further.

To see actual implementations of the most widely used Machine Learning algorithms in R and Python, check out my book ‘Practical Machine Learning with R and Python’ on Amazon.

Also see
1. Practical Machine Learning with R and Python – Part 3
2. R vs Python: Different similarities and similar differences
3. Perils and pitfalls of Big Data
4. Deep Learning from first principles in Python, R and Octave – Part 2
5. Getting started with memcached-libmemcached

To see all posts see “Index of posts”


Deep Learning from first principles in Python, R and Octave – Part 4


In this 4th post of my series on Deep Learning from first principles in Python, R and Octave, I explore the details of creating a multi-class classifier using the Softmax activation unit in a neural network. The earlier posts in this series were

1. Deep Learning from first principles in Python, R and Octave – Part 1. In this post I implemented logistic regression as a simple Neural Network in vectorized Python, R and Octave
2. Deep Learning from first principles in Python, R and Octave – Part 2. This 2nd part implemented the most elementary neural network with 1 hidden layer and any number of activation units in the hidden layer with sigmoid activation at the output layer
3. Deep Learning from first principles in Python, R and Octave – Part 3. The 3rd part implemented a multi-layer Deep Learning network with an arbitrary number of hidden layers and activation units per hidden layer. The output layer was for binary classification, based on the sigmoid unit. This multi-layer deep network was implemented in vectorized Python, R and Octave.

This 4th part takes a swing at multi-class classification and uses the Softmax as the activation unit in the output layer. Inclusion of the Softmax activation unit in the output layer requires us to compute the derivative of Softmax, or rather the “Jacobian” of the Softmax function, besides also computing the log loss for this Softmax activation during back propagation. Since the derivation of the Jacobian of a Softmax and the computation of the Cross Entropy/log loss is very involved, I have implemented a basic neural network with just 1 hidden layer with the Softmax activation at the output layer. I also perform multi-class classification based on the ‘spiral’ data set from the CS231n Convolutional Neural Networks Stanford course, to test the performance and correctness of the implementations in Python, R and Octave. You can clone/download the code for the Python, R and Octave implementations from Github at Deep Learning – Part 4

The Softmax function takes an N dimensional vector as input and generates an N dimensional vector as output.
The Softmax function is given by
S_{j}= \frac{e^{z_{j}}}{\sum_{k=1}^{N}e^{z_{k}}}
There is a probabilistic interpretation of the Softmax, since the Softmax values of a vector always add up to 1: each value is an exponential divided by the sum of all the exponentials.

As mentioned earlier, the Softmax takes a vector as input and returns a vector of outputs. For example the Softmax of the vector a=[1, 3, 6] is the vector S=[0.0063, 0.0471, 0.9464]. Notice that the ordering of the outputs follows that of the inputs, with the largest input dominating the output, and that the outputs sum to 1. Also, the derivative of a vector with respect to another vector is known as the Jacobian. By the way, ‘The Matrix Calculus You Need For Deep Learning’ by Terence Parr and Jeremy Howard is a very good paper that distills all the main mathematical concepts for Deep Learning in one place.
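As a quick sanity check of the numbers above, the short numpy snippet below (an illustrative sketch, separate from the implementations later in this post) computes the Softmax of a=[1, 3, 6] and confirms that the outputs sum to 1.

import numpy as np

def softmax(z):
    # Subtract the maximum for numerical stability; the result is unchanged
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

a = np.array([1.0, 3.0, 6.0])
S = softmax(a)
print(np.round(S, 4))   # approx [0.0064 0.0471 0.9465] - the values quoted above, up to rounding
print(np.sum(S))        # sums to 1 (up to floating point) - the probabilistic interpretation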

Let us take a simple 2 layered neural network with just 2 activation units in the hidden layer, as shown below

Z_{1}^{1} =W_{11}^{1}x_{1} + W_{21}^{1}x_{2} + b_{1}^{1}
Z_{2}^{1} =W_{12}^{1}x_{1} + W_{22}^{1}x_{2} + b_{2}^{1}
and
A_{1}^{1} = g'(Z_{1}^{1})
A_{2}^{1} = g'(Z_{2}^{1})
where g'() is the activation unit in the hidden layer, which can be a relu, sigmoid or a tanh function

Note: The superscript denotes the layer. The above denotes the equation for layer 1
of the neural network. For layer 2 with the Softmax activation, the equations are
Z_{1}^{2} =W_{11}^{2}A_{1}^{1} + W_{21}^{2}A_{2}^{1} + b_{1}^{2}
Z_{2}^{2} =W_{12}^{2}A_{1}^{1} + W_{22}^{2}A_{2}^{1} + b_{2}^{2}
and
A_{1}^{2} = S(Z_{1}^{2})
A_{2}^{2} = S(Z_{2}^{2})
where S() is the Softmax activation function
S=\begin{pmatrix}  S(Z_{1}^{2})\\ S(Z_{2}^{2})  \end{pmatrix}
S=\begin{pmatrix}  \frac{e^{Z1}}{e^{Z1}+e^{Z2}}\\ \frac{e^{Z2}}{e^{Z1}+e^{Z2}}  \end{pmatrix}

The Jacobian of the softmax ‘S’ is given by
\begin{pmatrix}  \frac {\partial S_{1}}{\partial Z_{1}} & \frac {\partial S_{1}}{\partial Z_{2}}\\  \frac {\partial S_{2}}{\partial Z_{1}} & \frac {\partial S_{2}}{\partial Z_{2}}  \end{pmatrix}
\begin{pmatrix}  \frac{\partial}{\partial Z_{1}} \frac {e^{Z1}}{e^{Z1}+ e^{Z2}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z1}}{e^{Z1}+ e^{Z2}}\\  \frac{\partial}{\partial Z_{1}} \frac {e^{Z2}}{e^{Z1}+ e^{Z2}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z2}}{e^{Z1}+ e^{Z2}}  \end{pmatrix}     – (A)

Now the ‘quotient rule’ of derivatives is as follows. If u and v are functions of x, then
\frac{d}{dx} \frac {u}{v} =\frac {v\frac{du}{dx} - u\frac{dv}{dx}}{v^{2}}
Using this to compute each element of the above Jacobian matrix, and writing \Sigma = e^{Z1}+e^{Z2} for the denominator, we see that
when i=j we have
\frac {\partial}{\partial Z_{1}}\frac{e^{Z1}}{e^{Z1}+e^{Z2}} =  \frac {\Sigma e^{Z1} - (e^{Z1})^{2}}{\Sigma^{2}}
and when i \neq j
\frac {\partial}{\partial Z_{1}}\frac{e^{Z2}}{e^{Z1}+e^{Z2}} =  \frac {0 - e^{Z1}e^{Z2}}{\Sigma^{2}}
This is of the general form
\frac {\partial S_{j}}{\partial z_{i}} = S_{i}( 1-S_{j})  when i=j
and
\frac {\partial S_{j}}{\partial z_{i}} = -S_{i}S_{j}  when i \neq j
Note: Since the Softmax essentially gives the probability the following
notation is also used
\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j}) when i=j
and
\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j} when i \neq j
If you throw the “Kronecker delta” into the equation, then the above equations can be expressed even more concisely as
\frac {\partial p_{j}}{\partial z_{i}} = p_{i} (\delta_{ij} - p_{j})
where \delta_{ij} = 1 when i=j and 0 when i \neq j

This reduces the Jacobian of the simple 2-output Softmax of equation (A) to
\begin{pmatrix}  p_{1}(1-p_{1}) & -p_{1}p_{2} \\  -p_{2}p_{1} & p_{2}(1-p_{2})  \end{pmatrix}
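As a sanity check of this result, the small numpy sketch below (illustrative only, with an arbitrary input vector of my choosing) builds the analytic Jacobian p_{i}(\delta_{ij} - p_{j}) and compares it against a finite-difference approximation.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

z = np.array([1.0, 2.0, 0.5])
p = softmax(z)

# Analytic Jacobian: J[i, j] = p_i * (delta_ij - p_j)
J_analytic = np.diag(p) - np.outer(p, p)

# Numerical Jacobian by central differences
eps = 1e-6
J_numeric = np.zeros((len(z), len(z)))
for j in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[j] += eps
    zm[j] -= eps
    J_numeric[:, j] = (softmax(zp) - softmax(zm)) / (2 * eps)

print(np.allclose(J_analytic, J_numeric, atol=1e-6))   # True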
The loss of Softmax is given by
L = -\sum_{i} y_{i} \log(p_{i})
For the 2 valued Softmax output this is
\frac {dL}{dp1} = -\frac {y_{1}}{p_{1}}
\frac {dL}{dp2} = -\frac {y_{2}}{p_{2}}
Using the chain rule we can write
\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \frac {\partial p_{i}}{\partial w_{pq}} (1)
and
\frac {\partial p_{i}}{\partial w_{pq}} = \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}} (2)
In expanded form this is
\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}}
Also
\frac {\partial L}{\partial Z_{i}} =\sum _{j} \frac {\partial L}{\partial p_{j}} \frac {\partial p_{j}}{\partial Z_{i}}
Therefore
\frac {\partial L}{\partial Z_{1}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{1}}
\frac {\partial L}{\partial Z_{1}}=-\frac {y_{1}}{p_{1}} p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}(-p_{2}p_{1})
Since
\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j}) when i=j
and
\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j} when i \neq j
which simplifies to
\frac {\partial L}{\partial Z_{1}} = -y_{1} + y_{1}p_{1} + y_{2}p_{1} = p_{1}(y_{1} + y_{2}) - y_{1}
and since \sum_{i} y_{i} =1
\frac {\partial L}{\partial Z_{1}}= p_{1} - y_{1}
Similarly
\frac {\partial L}{\partial Z_{2}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{2}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{2}}
\frac {\partial L}{\partial Z_{2}}=-\frac {y_{1}}{p_{1}}(-p_{1}p_{2}) - \frac {y_{2}}{p_{2}}p_{2}(1-p_{2})
= y_{1}p_{2} + y_{2}p_{2} - y_{2} = p_{2}(y_{1} + y_{2}) - y_{2}
\frac {\partial L}{\partial Z_{2}} = p_{2} - y_{2}
In general this is of the form
\frac {\partial L}{\partial z_{i}} = p_{i} -y_{i}
For example, if the probabilities computed were p=[0.1, 0.7, 0.2], this implies that the class with probability 0.7 is the likely class. The ‘one hot encoding’ of the target y would then be y=[0, 1, 0], and therefore the gradient p-y = [0.1, -0.3, 0.2]
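The snippet below is a minimal numpy sketch of how this gradient is computed for a batch of samples (the scores Z and labels y here are made up for illustration; the actual implementations below use the helper functions in DLfunctions41). The one hot encoded y is handled implicitly by fancy indexing.

import numpy as np

def softmax(Z):
    # Z has shape (numSamples, numClasses); Softmax is taken across each row
    e = np.exp(Z - np.max(Z, axis=1, keepdims=True))
    return e / np.sum(e, axis=1, keepdims=True)

# 4 hypothetical samples with 3 classes; y holds the correct class label of each sample
Z = np.array([[1.0, 2.0, 0.5],
              [0.2, 0.1, 1.5],
              [2.0, 0.3, 0.3],
              [0.5, 0.5, 0.5]])
y = np.array([1, 2, 0, 2])
numTraining = Z.shape[0]

P = softmax(Z)
# Cross entropy/log loss: pick the probability of the correct class for each sample
loss = -np.mean(np.log(P[range(numTraining), y]))

# Gradient dL/dZ = (p - y)/numTraining, with the one hot y built via fancy indexing
dZ = P.copy()
dZ[range(numTraining), y] -= 1
dZ = dZ / numTraining
print(loss)
print(dZ)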

Note: Further, we could extend this derivation to a Softmax activation unit that outputs 3 classes
S=\begin{pmatrix}  \frac{e^{z1}}{e^{z1}+e^{z2}+e^{z3}}\\ \frac{e^{z2}}{e^{z1}+e^{z2}+e^{z3}} \\ \frac{e^{z3}}{e^{z1}+e^{z2}+e^{z3}}  \end{pmatrix}

We could derive
\frac {\partial L}{\partial z1}= \frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial z_{1}} +\frac {\partial L}{\partial p_{3}} \frac {\partial p_{3}}{\partial z_{1}} which similarly reduces to
\frac {\partial L}{\partial z_{1}}=-\frac {y_{1}}{p_{1}} p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}(-p_{2}p_{1}) - \frac {y_{3}}{p_{3}}(-p_{3}p_{1})
= -y_{1}+ y_{1}p_{1} + y_{2}p_{1} + y_{3}p_{1} =  p_{1}(y_{1} + y_{2} + y_{3}) - y_{1}  = p_{1} - y_{1}
Interestingly, despite the lengthy derivations the final result is simple and intuitive!

As seen in my post ‘Deep Learning from first principles in Python, R and Octave – Part 3’, the key equations for forward and backward propagation are

Forward propagation equations layer 1
Z_{1} = W_{1}X +b_{1}     and  A_{1} = g(Z_{1})
Forward propagation equations layer 2
Z_{2} = W_{2}A_{1} +b_{2}  and  A_{2} = S(Z_{2})

Backward propagation equations layer 2
\partial L/\partial W_{2} =\partial L/\partial Z_{2} *A_{1}
\partial L/\partial b_{2} =\partial L/\partial Z_{2}
\partial L/\partial A_{1} = \partial L/\partial Z_{2} * W_{2}
Backward propagation equations layer 1
\partial L/\partial W_{1} =\partial L/\partial Z_{1} *A_{0}
\partial L/\partial b_{1} =\partial L/\partial Z_{1}
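Putting the above equations together, here is a bare-bones numpy sketch of one hidden layer with Relu and a Softmax output, trained with gradient descent on a few made-up points. This is purely illustrative and independent of the DLfunctions41 helper functions used in the full implementations below; the data, layer sizes and learning rate are arbitrary.

import numpy as np

np.random.seed(1)
# Made-up data: 5 samples with 2 features, 3 classes, stored as (features, samples)
X = np.random.randn(2, 5)
y = np.array([0, 2, 1, 1, 0])
numHidden, numClasses, m = 4, 3, X.shape[1]

# Initialize weights and biases
W1 = 0.01 * np.random.randn(numHidden, 2);        b1 = np.zeros((numHidden, 1))
W2 = 0.01 * np.random.randn(numClasses, numHidden); b2 = np.zeros((numClasses, 1))

for i in range(200):
    # Forward propagation: layer 1 (relu), layer 2 (softmax)
    Z1 = W1 @ X + b1
    A1 = np.maximum(0, Z1)
    Z2 = W2 @ A1 + b2
    expZ = np.exp(Z2 - np.max(Z2, axis=0, keepdims=True))
    A2 = expZ / np.sum(expZ, axis=0, keepdims=True)

    # Cross entropy/log loss of the correct class probabilities
    loss = -np.mean(np.log(A2[y, range(m)]))

    # Backward propagation: dL/dZ2 = p - y (one hot), averaged over the samples
    dZ2 = A2.copy()
    dZ2[y, range(m)] -= 1
    dZ2 /= m
    dW2 = dZ2 @ A1.T;    db2 = np.sum(dZ2, axis=1, keepdims=True)
    dA1 = W2.T @ dZ2
    dZ1 = dA1 * (Z1 > 0)          # derivative of relu
    dW1 = dZ1 @ X.T;     db1 = np.sum(dZ1, axis=1, keepdims=True)

    # Gradient descent update
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

print(loss)   # loss after the final iteration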

2.0 Spiral data set

As I mentioned earlier, I will be using the ‘spiral’ data from CS231n Convolutional Neural Networks to ensure that my vectorized implementations in Python, R and Octave are correct. Here is the ‘spiral’ data set.

import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
exec(open("././DLfunctions41.py").read())

# Create an input data set - Taken from CS231n Convolutional Neural networks
# http://cs231n.github.io/neural-networks-case-study/
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
# Plot the data
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.savefig("fig1.png", bbox_inches='tight')


The vectorized implementations in Python, R and Octave are shown below

2.1 Multi-class classification with Softmax – Python code

A simple 2 layer Neural network with a single hidden layer, with 100 Relu activation units in the hidden layer and the Softmax activation unit in the output layer, is used for multi-class classification. This Deep Learning Network plots the non-linear boundary of the 3 classes as shown below

import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
exec(open("././DLfunctions41.py").read())

# Read the input data
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
  
# Set the number of features, hidden units in hidden layer and number of classes
numHidden=100 # No of hidden units in hidden layer
numFeats= 2 # dimensionality
numOutput = 3 # number of classes

# Initialize the model
parameters=initializeModel(numFeats,numHidden,numOutput)
W1= parameters['W1']
b1= parameters['b1']
W2= parameters['W2']
b2= parameters['b2']

# Set the learning rate
learningRate=0.6 

# Initialize losses
losses=[]
# Perform Gradient descent
for i in range(10000):
    # Forward propagation through hidden layer with Relu units
    A1,cache1= layerActivationForward(X.T,W1,b1,'relu')
    
    # Forward propagation through output layer with Softmax
    A2,cache2 = layerActivationForward(A1,W2,b2,'softmax')
    
    # No of training examples
    numTraining = X.shape[0]
    # Compute log probs. Take the log prob of correct class based on output y
    correct_logprobs = -np.log(A2[range(numTraining),y])
    # Compute loss
    loss = np.sum(correct_logprobs)/numTraining
    
    # Print the loss
    if i % 1000 == 0:
        print("iteration %d: loss %f" % (i, loss))
        losses.append(loss)

    dA=0

    # Backward  propagation through output layer with Softmax
    dA1,dW2,db2 = layerActivationBackward(dA, cache2, y, activationFunc='softmax')
    # Backward  propagation through hidden layer with Relu unit
    dA0,dW1,db1 = layerActivationBackward(dA1.T, cache1, y, activationFunc='relu')
    
    #Update parameters with the learning rate
    W1 += -learningRate * dW1
    b1 += -learningRate * db1
    W2 += -learningRate * dW2.T
    b2 += -learningRate * db2.T

#Plot losses vs iterations  
i=np.arange(0,10000,1000)
plt.plot(i,losses)

plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.title('Losses vs Iterations')
plt.savefig("fig2.png", bbox="tight")

#Compute the multi-class Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# We need to determine the predicted values from the learnt data
# Forward propagation through hidden layer with Relu units
A1,cache1= layerActivationForward(X.T,W1,b1,'relu')
    
# Forward propagation through output layer with Softmax
A2,cache2 = layerActivationForward(A1,W2,b2,'softmax')
#Compute predicted values from weights and biases
yhat=np.argmax(A2, axis=1)

a=confusion_matrix(y.T,yhat.T)
print("Multi-class Confusion Matrix")
print(a)
## iteration 0: loss 1.098507
## iteration 1000: loss 0.214611
## iteration 2000: loss 0.043622
## iteration 3000: loss 0.032525
## iteration 4000: loss 0.025108
## iteration 5000: loss 0.021365
## iteration 6000: loss 0.019046
## iteration 7000: loss 0.017475
## iteration 8000: loss 0.016359
## iteration 9000: loss 0.015703
## Multi-class Confusion Matrix
## [[ 99   1   0]
##  [  0 100   0]
##  [  0   1  99]]

To know more details about the Confusion Matrix and other related measures, check out my book ‘Practical Machine Learning with R and Python’ on Amazon

2.2 Multi-class classification with Softmax – R code

The spiral data set created with Python was saved, and is used as the input for the R code. The R Neural Network seems to perform much, much slower than both Python and Octave. Not sure why! Incidentally the computation of loss and the softmax derivative are identical for both R and Octave, yet R is much slower. To compute the softmax derivative I create matrices for the one hot encoded yi and then stack them before subtracting pi-yi. I am sure there is a more elegant and more efficient way to do this, much like Python. Any suggestions?

library(ggplot2)
library(dplyr)
library(RColorBrewer)
source("DLfunctions41.R")
# Read the spiral dataset
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 
Z1=data.frame(Z)
#Plot the dataset
ggplot(Z1,aes(x=V1,y=V2,col=V3)) +geom_point() + 
  scale_colour_gradientn(colours = brewer.pal(10, "Spectral"))

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X1 <- t(X)
Y1 <- t(y)

# Initialize number of features, number of hidden units in hidden layer and
# number of classes
numFeats<-2 # No features
numHidden<-100 # No of hidden units
numOutput<-3 # No of classes

# Initialize model
parameters <-initializeModel(numFeats, numHidden,numOutput)

W1 <-parameters[['W1']]
b1 <-parameters[['b1']]
W2 <-parameters[['W2']]
b2 <-parameters[['b2']]

# Set the learning rate
learningRate <- 0.5
# Initialize losses
losses <- NULL
# Perform gradient descent
for(i in 0:9000){

# Forward propagation through hidden layer with Relu units
retvals <- layerActivationForward(X1,W1,b1,'relu')
A1 <- retvals[['A']]
cache1 <- retvals[['cache']]
forward_cache1 <- cache1[['forward_cache1']]
activation_cache <- cache1[['activation_cache']]

# Forward propagation through output layer with Softmax units
retvals = layerActivationForward(A1,W2,b2,'softmax')
A2 <- retvals[['A']]
cache2 <- retvals[['cache']]
forward_cache2 <- cache2[['forward_cache1']]
activation_cache2 <- cache2[['activation_cache']]

# No of training examples
numTraining <- dim(X)[1]
dA <-0

# Select the elements where the y values are 0, 1 or 2 and make a vector
a=c(A2[y==0,1],A2[y==1,2],A2[y==2,3])
# Take log
correct_probs = -log(a)
# Compute loss
loss= sum(correct_probs)/numTraining

if(i %% 1000 == 0){
sprintf("iteration %d: loss %f",i, loss)
print(loss)
# Store the loss for plotting later
losses <- c(losses,loss)
}
# Backward propagation through output layer with Softmax units
retvals = layerActivationBackward(dA, cache2, y, activationFunc='softmax')
dA1 = retvals[['dA_prev']]
dW2= retvals[['dW']]
db2= retvals[['db']]
# Backward propagation through hidden layer with Relu units
retvals = layerActivationBackward(t(dA1), cache1, y, activationFunc='relu')
dA0 = retvals[['dA_prev']]
dW1= retvals[['dW']]
db1= retvals[['db']]

# Update parameters
W1 <- W1 - learningRate * dW1
b1 <- b1 - learningRate * db1
W2 <- W2 - learningRate * t(dW2)
b2 <- b2 - learningRate * t(db2)
}
## [1] 1.212487
## [1] 0.5740867
## [1] 0.4048824
## [1] 0.3561941
## [1] 0.2509576
## [1] 0.7351063
## [1] 0.2066114
## [1] 0.2065875
## [1] 0.2151943
## [1] 0.1318807

 

#Create iterations
iterations <- seq(0,9000,by=1000)
df <- data.frame(iterations,losses)
ggplot(df,aes(x=iterations,y=losses)) + geom_point() + geom_line(color="blue") +
    ggtitle("Losses vs iterations") + xlab("Iterations") + ylab("Loss")

plotDecisionBoundary(Z,W1,b1,W2,b2)



Multi-class Confusion Matrix

library(caret)
library(e1071)

# Forward propagation through hidden layer with Relu units
retvals <- layerActivationForward(X1,W1,b1,'relu')
A1 <- retvals[['A']]

# Forward propagation through output layer with Softmax units
retvals = layerActivationForward(A1,W2,b2,'softmax')
A2 <- retvals[['A']]
yhat <- apply(A2, 1,which.max) -1
# Compute the multi-class confusion matrix with caret
confusionMatrix(factor(yhat),factor(y))
Confusion Matrix and Statistics
          Reference
Prediction  0  1  2
         0 97  0  1
         1  2 96  4
         2  1  4 95

Overall Statistics                                        
               Accuracy : 0.96            
                 95% CI : (0.9312, 0.9792)
    No Information Rate : 0.3333          
    P-Value [Acc > NIR] : <2e-16          
                                          
                  Kappa : 0.94            
 Mcnemar's Test P-Value : 0.5724          
Statistics by Class:

                     Class: 0 Class: 1 Class: 2
Sensitivity            0.9700   0.9600   0.9500
Specificity            0.9950   0.9700   0.9750
Pos Pred Value         0.9898   0.9412   0.9500
Neg Pred Value         0.9851   0.9798   0.9750
Prevalence             0.3333   0.3333   0.3333
Detection Rate         0.3233   0.3200   0.3167
Detection Prevalence   0.3267   0.3400   0.3333
Balanced Accuracy      0.9825   0.9650   0.9625

My book “Practical Machine Learning with R and Python” includes the implementation for many Machine Learning algorithms and associated metrics. Pick up your copy today!

2.3 Multi-class classification with Softmax – Octave code

A 2 layer Neural network with the Softmax activation unit in the output layer is constructed in Octave. The same spiral data set is used for Octave as well.

source("DL41functions.m")
# Read the spiral data
data=csvread("spiral.csv");
# Setup the data
X=data(:,1:2);
Y=data(:,3);
# Set the number of features, number of hidden units in hidden layer and number of classes
numFeats=2; #No features
numHidden=100; # No of hidden units
numOutput=3; # No of classes
# Initialize model
[W1 b1 W2 b2] = initializeModel(numFeats,numHidden,numOutput);
# Initialize losses
losses=[]
#Initialize learningRate
learningRate=0.5;
for k =1:10000
# Forward propagation through hidden layer with Relu units
[A1,cache1 activation_cache1]= layerActivationForward(X',W1,b1,activationFunc ='relu');
# Forward propagation through output layer with Softmax units
[A2,cache2 activation_cache2] = layerActivationForward(A1,W2,b2,activationFunc='softmax');
# No of training examples
numTraining = size(X)(1);
# Select rows where Y=0,1,and 2 and concatenate to a long vector
a=[A2(Y==0,1) ;A2(Y==1,2) ;A2(Y==2,3)];
#Select the correct column for log prob
correct_probs = -log(a);
#Compute log loss
loss= sum(correct_probs)/numTraining;
if(mod(k,1000) == 0)
disp(loss);
losses=[losses loss];
endif
dA=0;
# Backward propagation through output layer with Softmax units
[dA1 dW2 db2] = layerActivationBackward(dA, cache2, activation_cache2,Y,activationFunc='softmax');
# Backward propagation through hidden layer with Relu units
[dA0,dW1,db1] = layerActivationBackward(dA1', cache1, activation_cache1, Y, activationFunc='relu');
#Update parameters
W1 += -learningRate * dW1;
b1 += -learningRate * db1;
W2 += -learningRate * dW2';
b2 += -learningRate * db2';
endfor
# Plot Losses vs Iterations
iterations=0:1000:9000
plotCostVsIterations(iterations,losses)
# Plot the decision boundary
plotDecisionBoundary( X,Y,W1,b1,W2,b2)

The code for the Python, R and Octave implementations can be downloaded from Github at Deep Learning – Part 4

Conclusion

In this post I have implemented a 2 layer Neural Network with the Softmax classifier. In Part 3, I implemented a multi-layer Deep Learning Network. I intend to include the Softmax activation unit in the generalized multi-layer Deep Network along with the other activation units of sigmoid, tanh and relu.

Stick around, I’ll be back!!
Watch this space!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. CS231 Convolutional Neural Networks for Visual Recognition
4. Eli Bendersky’s Website – The Softmax function and its derivative
5. Cross Validated – Backpropagation with Softmax / Cross Entropy
6. Stackoverflow – CS231n: How to calculate gradient for Softmax loss function?
7. Math Stack Exchange – Derivative of Softmax
8. The Matrix Calculus for Deep Learning

You may like
1.My book ‘Practical Machine Learning with R and Python’ on Amazon
2. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
3. Deblurring with OpenCV: Weiner filter reloaded
4. A method to crowd source pothole marking on (Indian) roads
5. Rock N’ Roll with Bluemix, Cloudant & NodeExpress
6. Sea shells on the seashore
7. Design Principles of Scalable, Distributed Systems

To see all posts click Index of posts

Deep Learning from first principles in Python, R and Octave – Part 1


“You don’t perceive objects as they are. You perceive them as you are.”
“Your interpretation of physical objects has everything to do with the historical trajectory of your brain – and little to do with the objects themselves.”
“The brain generates its own reality, even before it receives information coming in from the eyes and the other senses. This is known as the internal model”

                          David Eagleman - The Brain: The Story of You

This is the first in the series of posts I intend to write on Deep Learning. This post is inspired by the Deep Learning Specialization by Prof Andrew Ng on Coursera and Neural Networks for Machine Learning by Prof Geoffrey Hinton, also on Coursera. In this post I implement Logistic Regression with a 2 layer Neural Network, i.e. a Neural Network that just has an input layer and an output layer, with no hidden layer. I am certain that any self-respecting Deep Learning/Neural Network practitioner would consider a Neural Network without hidden layers as no Neural Network at all!

This 2 layer network is implemented in Python, R and Octave. I have included Octave in the mix, as Octave is a close cousin of Matlab. These implementations in Python, R and Octave are equivalent vectorized implementations, so if you are familiar with any one of the languages, you should be able to look at the corresponding code in the other two. You can download this R Markdown file and Octave code from DeepLearning -Part 1

To start with, Logistic Regression is performed using sklearn’s logistic regression package for the cancer data set also from sklearn. This is shown below

1. Logistic Regression

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification, make_blobs

from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_breast_cancer
# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
                                                   random_state = 0)
# Call the Logistic Regression function
clf = LogisticRegression().fit(X_train, y_train)
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
     .format(clf.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
     .format(clf.score(X_test, y_test)))
## Accuracy of Logistic regression classifier on training set: 0.96
## Accuracy of Logistic regression classifier on test set: 0.96

To check on other classification algorithms, check my post Practical Machine Learning with R and Python – Part 2. You could also take a look at my book, available on Amazon, in which I implement the most popular Machine Learning algorithms in both R and Python – My book ‘Practical Machine Learning with R and Python’ on Amazon

2. Logistic Regression as a 2 layer Neural Network

In the following section Logistic Regression is implemented as a 2 layer Neural Network in Python, R and Octave. The same cancer data set from sklearn will be used to train and test the Neural Network in Python, R and Octave. This can be represented diagrammatically as below

 

The cancer data set has 30 input features, and the target variable ‘output’ is either 0 or 1. Hence the sigmoid activation function will be used in the output layer for classification.

This simple 2 layer Neural Network is shown below
At the input layer there are 30 features and the corresponding weights of these inputs which are initialized to small random values.
Z= w_{1}x_{1} +w_{2}x_{2} +..+ w_{30}x_{30} + b
where ‘b’ is the bias term

The Activation function is the sigmoid function which is a= 1/(1+e^{-z})
The Loss, when the sigmoid function is used in the output layer, is given by
L=-(ylog(a) + (1-y)log(1-a)) (1)

Gradient Descent

Forward propagation

In the forward propagation cycle of the Neural Network, the output Z and the output of the activation function, the sigmoid function, are first computed. Then, using the output ‘y’ for the given features, the ‘Loss’ is computed using equation (1) above.

Backward propagation

The backward propagation cycle determines how the ‘Loss’ is impacted by small variations in the weights, from the output back up to the input layer. In other words, backward propagation computes the changes in the weights at the input layer which will minimize the loss. Several cycles of gradient descent are performed along the path of steepest descent to find the local minima. In other words, the set of weights and biases at the input layer which result in the lowest loss is computed by gradient descent. The weights at the input layer are adjusted in steps scaled by a parameter known as the ‘learning rate’. Too big a ‘learning rate’ can overshoot the local minima, and too small a ‘learning rate’ can take a long time to reach the local minima. This is done for ‘m’ training examples.
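To see the effect of the learning rate mentioned above, here is a tiny, purely illustrative gradient descent on the function f(w) = w^{2} (whose gradient is 2w), with a rate that is too small, one that is reasonable and one that is too large.

import numpy as np

# Minimize f(w) = w^2 (gradient 2w), starting from w = 5, with different learning rates
for learningRate in [0.01, 0.1, 1.1]:
    w = 5.0
    for i in range(50):
        w = w - learningRate * 2 * w
    print(learningRate, w)
# A rate of 0.01 is still far from the minimum after 50 steps,
# 0.1 converges very close to 0, while 1.1 overshoots and diverges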

Chain rule of differentiation
Let y=f(u)
and u=g(x) then
\partial y/\partial x = \partial y/\partial u * \partial u/\partial x

Derivative of sigmoid
\sigma=1/(1+e^{-z})
Let x= 1 + e^{-z}  then
\sigma = 1/x
\partial \sigma/\partial x = -1/x^{2}
\partial x/\partial z = -e^{-z}
Using the chain rule of differentiation we get
\partial \sigma/\partial z = \partial \sigma/\partial x * \partial x/\partial z
=-1/(1+e^{-z})^{2}* -e^{-z} = e^{-z}/(1+e^{-z})^{2}
Therefore \partial \sigma/\partial z = \sigma(1-\sigma)        -(2)

The 3 equations for the 2 layer Neural Network representation of Logistic Regression are
L=-(y*log(a) + (1-y)*log(1-a))      -(a)
a=1/(1+e^{-z})      -(b)
Z= w_{1}x_{1} +w_{2}x_{2} +...+ w_{30}x_{30} +b   -(c)

The back propagation step requires the computation of dL/dw_{i} and dL/db_{i}. In the case of regression it would be dE/dw_{i} and dE/db_{i}, where E is the Mean Squared Error function.
Computing the derivatives for back propagation we have
dL/da = -(y/a - (1-y)/(1-a))          -(d)
because d/dx(logx) = 1/x
Also from equation (2) we get
da/dZ = a (1-a)                                  – (e)
By chain rule
\partial L/\partial Z = \partial L/\partial a * \partial a/\partial Z
therefore substituting the results of (d) & (e) we get
\partial L/\partial Z = -(y/a - (1-y)/(1-a)) * a(1-a) = a-y         (f)
Finally
\partial L/\partial w_{i}= \partial L/\partial a * \partial a/\partial Z * \partial Z/\partial w_{i}                                                           -(g)
\partial Z/\partial w_{i} = x_{i}            – (h)
and from (f) we have  \partial L/\partial Z =a-y
Therefore  (g) reduces to
\partial L/\partial w_{i} = x_{i}* (a-y) -(i)
Also
\partial L/\partial b = \partial L/\partial a * \partial a/\partial Z * \partial Z/\partial b -(j)
Since
\partial Z/\partial b = 1
\partial L/\partial b = a-y
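A quick numerical check of the key result \partial L/\partial Z = a - y, using finite differences on the loss of equation (a) for arbitrary values of z and y (an illustrative sketch, separate from the implementations below).

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(z, y):
    a = sigmoid(z)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

z, y = 0.4, 1.0
eps = 1e-6
numeric = (loss(z + eps, y) - loss(z - eps, y)) / (2 * eps)
analytic = sigmoid(z) - y
print(numeric, analytic)   # both approximately -0.4013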

Gradient descent then updates the weights at the input layer and the corresponding bias using the values of dw_{i} and db
w_{i} := w_{i} -\alpha * dw_{i}
b := b -\alpha * db
I found the computation graph representation in the book ‘Deep Learning’ by Ian Goodfellow, Yoshua Bengio and Aaron Courville very useful for visualizing and also computing the backward propagation. For the 2 layer Neural Network of Logistic Regression the computation graph is shown below

3. Neural Network for Logistic Regression -Python code (vectorized)

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Define the sigmoid function
def sigmoid(z):  
    a=1/(1+np.exp(-z))    
    return a

# Initialize
def initialize(dim):
    w = np.zeros(dim).reshape(dim,1)
    b = 0   
    return w,b

# Compute the loss
def computeLoss(numTraining,Y,A):
    loss=-1/numTraining *np.sum(Y*np.log(A) + (1-Y)*(np.log(1-A)))
    return(loss)

# Execute the forward propagation
def forwardPropagation(w,b,X,Y):
    # Compute Z
    Z=np.dot(w.T,X)+b
    # Determine the number of training samples
    numTraining=float(len(X))
    # Compute the output of the sigmoid activation function 
    A=sigmoid(Z)
    #Compute the loss
    loss = computeLoss(numTraining,Y,A)
    # Compute the gradients dZ, dw and db
    dZ=A-Y
    dw=1/numTraining*np.dot(X,dZ.T)
    db=1/numTraining*np.sum(dZ)
    
    # Return the results as a dictionary
    gradients = {"dw": dw,
             "db": db}
    loss = np.squeeze(loss)
    return gradients,loss

# Compute Gradient Descent    
def gradientDescent(w, b, X, Y, numIerations, learningRate):
    losses=[]
    idx =[]
    # Iterate 
    for i in range(numIerations):
        gradients,loss=forwardPropagation(w,b,X,Y)
        #Get the derivates
        dw = gradients["dw"]
        db = gradients["db"]
        w = w-learningRate*dw
        b = b-learningRate*db

        # Store the loss
        if i % 100 == 0:
            idx.append(i)
            losses.append(loss)      
        # Set params and grads
        params = {"w": w,
                  "b": b}  
        grads = {"dw": dw,
                 "db": db}
    
    return params, grads, losses,idx

# Predict the output for a training set 
def predict(w,b,X):
    size=X.shape[1]
    yPredicted=np.zeros((1,size))
    Z=np.dot(w.T,X)+b
    # Compute the sigmoid
    A=sigmoid(Z)
    for i in range(A.shape[1]):
        #If the value is > 0.5 then set as 1
        if(A[0][i] > 0.5):
            yPredicted[0][i]=1
        else:
        # Else set as 0
            yPredicted[0][i]=0

    return yPredicted

#Normalize the data   
def normalize(x):
    # Compute the L2 norm of each row
    x_norm = np.linalg.norm(x,axis=1,keepdims=True)
    x= x/x_norm
    return x

   
# Run the 2 layer Neural Network on the cancer data set

from sklearn.datasets import load_breast_cancer
# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
# Create train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
                                                   random_state = 0)
# Normalize the data for better performance
X_train1=normalize(X_train)


# Create weight vectors of zeros. The size is the number of features in the data set=30
w=np.zeros((X_train.shape[1],1))
#w=np.zeros((30,1))
b=0

#Normalize the training data so that gradient descent performs better
X_train1=normalize(X_train)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_train2=X_train1.T

# Reshape to remove the rank 1 array and then transpose
y_train1=y_train.reshape(len(y_train),1)
y_train2=y_train1.T

# Run gradient descent for 4000 times and compute the weights
parameters, grads, costs,idx = gradientDescent(w, b, X_train2, y_train2, numIerations=4000, learningRate=0.75)
w = parameters["w"]
b = parameters["b"]
   

# Normalize X_test
X_test1=normalize(X_test)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=X_test1.T

#Reshape y_test
y_test1=y_test.reshape(len(y_test),1)
y_test2=y_test1.T

# Predict the values for the test and training sets
yPredictionTest = predict(w, b, X_test2)
yPredictionTrain = predict(w, b, X_train2)

# Print the accuracy
print("train accuracy: {} %".format(100 - np.mean(np.abs(yPredictionTrain - y_train2)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(yPredictionTest - y_test)) * 100))

# Plot the Costs vs the number of iterations
fig1=plt.plot(idx,costs)
fig1=plt.title("Gradient descent-Cost vs No of iterations")
fig1=plt.xlabel("No of iterations")
fig1=plt.ylabel("Cost")
fig1.figure.savefig("fig1", bbox_inches='tight')
## train accuracy: 90.3755868545 %
## test accuracy: 89.5104895105 %

Note: It can be seen that the accuracy on the training and test sets is 90.37% and 89.51% respectively. This is poorer than the 96% which the logistic regression of sklearn achieves! But this is mainly because of the absence of hidden layers, which are the real power of neural networks.

4. Neural Network for Logistic Regression -R code (vectorized)

source("RFunctions-1.R")
# Define the sigmoid function
sigmoid <- function(z){
    a <- 1/(1+ exp(-z))
    a
}

# Compute the loss
computeLoss <- function(numTraining,Y,A){
    loss <- -1/numTraining* sum(Y*log(A) + (1-Y)*log(1-A))
    return(loss)
}

# Compute forward propagation
forwardPropagation <- function(w,b,X,Y){
    # Compute Z
    Z <- t(w) %*% X +b
    #Set the number of samples
    numTraining <- ncol(X)
    # Compute the activation function
    A=sigmoid(Z) 
    
    #Compute the loss
    loss <- computeLoss(numTraining,Y,A)
    
    # Compute the gradients dZ, dw and db
    dZ<-A-Y
    dw<-1/numTraining * X %*% t(dZ)
    db<-1/numTraining*sum(dZ)
    
    fwdProp <- list("loss" = loss, "dw" = dw, "db" = db)
    return(fwdProp)
}

# Perform one cycle of Gradient descent
gradientDescent <- function(w, b, X, Y, numIerations, learningRate){
    losses <- NULL
    idx <- NULL
    # Loop through the number of iterations
    for(i in 1:numIerations){
        fwdProp <-forwardPropagation(w,b,X,Y)
        #Get the derivatives
        dw <- fwdProp$dw
        db <- fwdProp$db
        #Perform gradient descent
        w = w-learningRate*dw
        b = b-learningRate*db
        l <- fwdProp$loss
        # Store the loss
        if(i %% 100 == 0){
            idx <- c(idx,i)
            losses <- c(losses,l)  
        }
    }
    
    # Return the weights and losses
    gradDescnt <- list("w"=w,"b"=b,"dw"=dw,"db"=db,"losses"=losses,"idx"=idx)
   
    return(gradDescnt)
}

# Compute the predicted value for input
predict <- function(w,b,X){
    m=dim(X)[2]
    # Create a vector of 0's
    yPredicted=matrix(rep(0,m),nrow=1,ncol=m)
    Z <- t(w) %*% X +b
    # Compute sigmoid
    A=sigmoid(Z)
    for(i in 1:dim(A)[2]){
        # If A > 0.5 set value as 1
        if(A[1,i] > 0.5)
        yPredicted[1,i]=1
       else
        # Else set as 0
        yPredicted[1,i]=0
    }

    return(yPredicted)
}

# Normalize the matrix
normalize <- function(x){
    #Compute the L2 norm of each row of the matrix
    n<-as.matrix(sqrt(rowSums(x^2)))
    #Sweep by rows by the norm. The '1' in the function indicates the operation is performed on every row
    normalized<-sweep(x, 1, n, FUN="/")
    return(normalized)
}

# Run the 2 layer Neural Network on the cancer data set
# Read the data (from sklearn)
cancer <- read.csv("cancer.csv")
# Rename the target variable
names(cancer) <- c(seq(1,30),"output")
# Split as training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Set the features
X_train <-train[,1:30]
y_train <- train[,31]
X_test <- test[,1:30]
y_test <- test[,31]
# Create a matrix of 0's with the number of features
w <-matrix(rep(0,dim(X_train)[2]))
b <-0
X_train1 <- normalize(X_train)
X_train2=t(X_train1)

# Reshape  then transpose
y_train1=as.matrix(y_train)
y_train2=t(y_train1)

# Perform gradient descent
gradDescent= gradientDescent(w, b, X_train2, y_train2, numIerations=3000, learningRate=0.77)


# Normalize X_test
X_test1=normalize(X_test)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=t(X_test1)

#Reshape y_test and take transpose
y_test1=as.matrix(y_test)
y_test2=t(y_test1)

# Use the values of the weights generated from Gradient Descent
yPredictionTest = predict(gradDescent$w, gradDescent$b, X_test2)
yPredictionTrain = predict(gradDescent$w, gradDescent$b, X_train2)

sprintf("Train accuracy: %f",(100 - mean(abs(yPredictionTrain - y_train2)) * 100))
## [1] "Train accuracy: 90.845070"
sprintf("test accuracy: %f",(100 - mean(abs(yPredictionTest - y_test)) * 100))
## [1] "test accuracy: 87.323944"
df <-data.frame(gradDescent$idx, gradDescent$losses)
names(df) <- c("iterations","losses")
ggplot(df,aes(x=iterations,y=losses)) + geom_point() + geom_line(col="blue") +
    ggtitle("Gradient Descent - Losses vs No of Iterations") +
    xlab("No of iterations") + ylab("Losses")

5. Neural Network for Logistic Regression -Octave code (vectorized)


1;
# Define sigmoid function
function a = sigmoid(z)
a = 1 ./ (1+ exp(-z));
end
# Compute the loss
function loss=computeLoss(numtraining,Y,A)
loss = -1/numtraining * sum((Y .* log(A)) + (1-Y) .* log(1-A));
end


# Perform forward propagation
function [loss,dw,db,dZ] = forwardPropagation(w,b,X,Y)
% Compute Z
Z = w' * X + b;
numtraining = size(X)(1,2);
# Compute sigmoid
A = sigmoid(Z);


#Compute loss. Note this is element wise product
loss =computeLoss(numtraining,Y,A);
# Compute the gradients dZ, dw and db
dZ = A-Y;
dw = 1/numtraining* X * dZ';
db =1/numtraining*sum(dZ);

end

# Compute Gradient Descent
function [w,b,dw,db,losses,index]=gradientDescent(w, b, X, Y, numIerations, learningRate)
#Initialize losses and idx
losses=[];
index=[];
# Loop through the number of iterations
for i=1:numIerations,
[loss,dw,db,dZ] = forwardPropagation(w,b,X,Y);
# Perform Gradient descent
w = w - learningRate*dw;
b = b - learningRate*db;
if(mod(i,100) ==0)
# Append index and loss
index = [index i];
losses = [losses loss];
endif

end
end

# Determine the predicted value for dataset
function yPredicted = predict(w,b,X)
m = size(X)(1,2);
yPredicted=zeros(1,m);
# Compute Z
Z = w' * X + b;
# Compute sigmoid
A = sigmoid(Z);
for i=1:size(X)(1,2),
# Set predicted as 1 if A >= 0.5
if(A(1,i) >= 0.5)
yPredicted(1,i)=1;
else
yPredicted(1,i)=0;
endif
end
end


# Normalize each row by dividing by the square root of its sum of squares
function normalized = normalize(x)
# Compute the norm of each row: square the elements, sum each row and then take the square root
a = sqrt(sum(x .^ 2,2));
# Perform element wise division
normalized = x ./ a;
end


# Split into train and test sets
function [X_train,y_train,X_test,y_test] = trainTestSplit(dataset,trainPercent)
# Create a random index
ix = randperm(length(dataset));
# Split into training
trainSize = floor(trainPercent/100 * length(dataset));
train=dataset(ix(1:trainSize),:);
# And test
test=dataset(ix(trainSize+1:length(dataset)),:);
X_train = train(:,1:30);
y_train = train(:,31);
X_test = test(:,1:30);
y_test = test(:,31);
end


cancer=csvread("cancer.csv");
[X_train,y_train,X_test,y_test] = trainTestSplit(cancer,75);
w=zeros(size(X_train)(1,2),1);
b=0;
X_train1=normalize(X_train);
X_train2=X_train1';
y_train1=y_train';
[w1,b1,dw,db,losses,idx]=gradientDescent(w, b, X_train2, y_train1, numIerations=3000, learningRate=0.75);
# Normalize X_test
X_test1=normalize(X_test);
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=X_test1';
y_test1=y_test';
# Use the values of the weights generated from Gradient Descent
yPredictionTest = predict(w1, b1, X_test2);
yPredictionTrain = predict(w1, b1, X_train2);


trainAccuracy=100-mean(abs(yPredictionTrain - y_train1))*100
testAccuracy=100- mean(abs(yPredictionTest - y_test1))*100
trainAccuracy = 90.845
testAccuracy = 89.510
graphics_toolkit('gnuplot')
plot(idx,losses);
title ('Gradient descent- Cost vs No of iterations');
xlabel ("No of iterations");
ylabel ("Cost");

Conclusion
This post started with a simple 2 layer Neural Network implementation of Logistic Regression. Clearly the performance of this simple Neural Network is poor compared to the highly optimized sklearn Logistic Regression. This is because the above neural network did not have any hidden layers. Deep Learning & Neural Networks achieve extraordinary performance because of the presence of deep hidden layers.

The Deep Learning journey has begun… Don’t miss the bus!
Stay tuned for more interesting posts in Deep Learning!!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. Deep Learning, Ian Goodfellow, Yoshua Bengio and Aaron Courville
4. Neural Networks: The mechanics of backpropagation
5. Machine Learning

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Simplifying Machine Learning: Bias, Variance, regularization and odd facts – Part 4
3. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon
4. Practical Machine Learning with R and Python – Part 4
5. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
6. A Bluemix recipe with MongoDB and Node.js
7. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)

To see all posts check Index of posts

My book ‘Practical Machine Learning with R and Python’ on Amazon



My book ‘Practical Machine Learning with R and Python – Machine Learning in stereo’ is now available in both paperback ($9.99) and kindle ($6.97/Rs449) versions. In this book I implement some of the most common, but important Machine Learning algorithms in R and equivalent Python code. This is almost like listening to parallel channels of music in stereo!
1. Practical Machine Learning with R and Python – Machine Learning in Stereo (Paperback)
2. Practical Machine Learning with R and Python – Machine Learning in Stereo (Kindle)
This book is ideal both for beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and also deal with equivalent statements in R and Python. Those who are expert in either of the languages, R or Python, will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Table of Contents
Essential R
Essential Python for Datascience
R vs Python
Regression of a continuous variable
Classification and Cross Validation
Regression techniques and regularization
SVMs, Decision Trees and Validation curves
Splines, GAMs, Random Forests and Boosting
PCA, K-Means and Hierarchical Clustering

Pick up your copy today!!
Hope you have a great time learning as I did while implementing these algorithms!

Practical Machine Learning with R and Python – Part 5


This is the 5th and probably penultimate part of my series on ‘Practical Machine Learning with R and Python’. The earlier parts of this series included

1. Practical Machine Learning with R and Python – Part 1 In this initial post, I touch upon univariate, multivariate, polynomial regression and KNN regression in R and Python
2. Practical Machine Learning with R and Python – Part 2 In this post, I discuss Logistic Regression, KNN classification and cross validation error for both LOOCV and K-Fold in both R and Python
3. Practical Machine Learning with R and Python – Part 3 This post covered ‘feature selection’ in Machine Learning. Specifically I touch best fit, forward fit, backward fit, ridge(L2 regularization) & lasso (L1 regularization). The post includes equivalent code in R and Python.
4. Practical Machine Learning with R and Python – Part 4 In this part I discussed SVMs, Decision Trees, validation, precision-recall, and ROC curves

This post ‘Practical Machine Learning with R and Python – Part 5’ discusses regression with B-splines, natural splines, smoothing splines, generalized additive models (GAMS), bagging, random forest and boosting

As with my previous posts in this series, this post is largely based on the following 2 MOOC courses

1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Online Stanford
2. Applied Machine Learning in Python, Prof Kevyn Collins-Thompson, University Of Michigan, Coursera

You can download this R Markdown file and associated data files from Github at MachineLearning-RandPython-Part5

The content of this post and much more is now available as a compact book on Amazon in both formats – as a Paperback ($9.99) and a Kindle version ($6.99/Rs449). See ‘Practical Machine Learning with R and Python – Machine Learning in stereo’

For this part I have used the data sets from UCI Machine Learning repository(Communities and Crime and Auto MPG)

1. Splines

When performing regression (continuous or logistic) between a target variable and a feature (or a set of features), a single polynomial for the entire range of the data set usually does not provide a good fit. Rather, we need to fit separate regression curves to different sections of the data set.

There are several techniques which do this for e.g. piecewise-constant functions, piecewise-linear functions, piecewise-quadratic/cubic/4th order polynomial functions etc. One such set of functions are the cubic splines which fit cubic polynomials to successive sections of the dataset. The points where the cubic splines join, are called ‘knots’.

Since each section has a different cubic spline, there could be discontinuities (or breaks) at these knots. To prevent these discontinuities, ‘natural splines’ and ‘smoothing splines’ ensure that the separate cubic functions have 2nd order continuity at these knots with the adjacent splines. 2nd order continuity implies that the value, the 1st order derivative and the 2nd order derivative at these knots are equal.

A cubic spline with knots \alpha_{k}, k=1,2,3,..K is a piece-wise cubic polynomial with continuous derivatives up to order 2 at each knot. We can write y_{i} = \beta_{0} +\beta_{1}b_{1}(x_{i}) +\beta_{2}b_{2}(x_{i}) + .. + \beta_{K+3}b_{K+3}(x_{i}) + \epsilon_{i}.
For each (x_{i},y_{i}), the b_{j} are called ‘basis’ functions, where b_{1}(x_{i})=x_{i}, b_{2}(x_{i})=x_{i}^{2}, b_{3}(x_{i})=x_{i}^{3}, b_{k+3}(x_{i})=(x_{i} -\alpha_{k})_{+}^{3} where k=1,2,3… K. The 1st and 2nd derivatives of cubic splines are continuous at the knots. Hence splines provide a smooth continuous fit to the data by fitting different splines to different sections of the data.
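To make the basis functions concrete, here is a small numpy sketch (with made-up knots and synthetic data, purely for illustration; the R code below uses the splines package instead) that builds the truncated power basis x, x^{2}, x^{3}, (x - \alpha_{k})_{+}^{3} and fits the coefficients \beta by ordinary least squares.

import numpy as np

np.random.seed(7)
# Synthetic data: a noisy non-linear relationship
x = np.sort(np.random.uniform(0, 10, 120))
y = np.sin(x) + 0.2 * np.random.randn(len(x))

# Design matrix: intercept, x, x^2, x^3 and one truncated cubic (x - alpha_k)^3_+ per knot
knots = [2.5, 5.0, 7.5]
def spline_basis(x, knots):
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.where(x > k, (x - k)**3, 0.0) for k in knots]
    return np.column_stack(cols)

B = spline_basis(x, knots)
# Least squares fit of the basis coefficients beta
beta, *_ = np.linalg.lstsq(B, y, rcond=None)
yhat = B @ beta
print(np.round(beta, 3))
print("RMSE:", np.sqrt(np.mean((y - yhat)**2)))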

1.1a Fit a 4th degree polynomial – R code

In the code below a non-linear function (a 4th order polynomial) is used to fit the data. Usually when we fit a single polynomial to the entire data set the tails of the fit tend to vary a lot particularly if there are fewer points at the ends. Splines help in reducing this variation at the extremities

library(dplyr)
library(ggplot2)
source('RFunctions-1.R')
# Read the data
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
#Select specific columns
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
auto <- df2[complete.cases(df2),]
# Fit a 4th degree polynomial
fit=lm(mpg~poly(horsepower,4),data=auto)
#Display a summary of fit
summary(fit)
## 
## Call:
## lm(formula = mpg ~ poly(horsepower, 4), data = auto)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -14.8820  -2.5802  -0.1682   2.2100  16.1434 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            23.4459     0.2209 106.161   <2e-16 ***
## poly(horsepower, 4)1 -120.1377     4.3727 -27.475   <2e-16 ***
## poly(horsepower, 4)2   44.0895     4.3727  10.083   <2e-16 ***
## poly(horsepower, 4)3   -3.9488     4.3727  -0.903    0.367    
## poly(horsepower, 4)4   -5.1878     4.3727  -1.186    0.236    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4.373 on 387 degrees of freedom
## Multiple R-squared:  0.6893, Adjusted R-squared:  0.6861 
## F-statistic: 214.7 on 4 and 387 DF,  p-value: < 2.2e-16
#Get the range of horsepower
hp <- range(auto$horsepower)
#Create a sequence to be used for plotting
hpGrid <- seq(hp[1],hp[2],by=10)
#Predict for these values of horsepower. Set Standard error as TRUE
pred=predict(fit,newdata=list(horsepower=hpGrid),se=TRUE)
#Compute bands on either side that is 2xSE
seBands=cbind(pred$fit+2*pred$se.fit,pred$fit-2*pred$se.fit)
#Plot the fit with Standard Error bands
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
     ylab="MPG", main="Polynomial of degree 4")
lines(hpGrid,pred$fit,lwd=2,col="blue")
matlines(hpGrid,seBands,lwd=2,col="blue",lty=3)

(Figure: Polynomial of degree 4)

1.1b Fit a 4th degree polynomial – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
#Read the auto data
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
# Select columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
# Convert all columns to numeric
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')

#Drop NAs
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['horsepower']]
y=autoDF3['mpg']
#Create a polynomial of degree 4
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)

# Fit a polynomial regression line
linreg = LinearRegression().fit(X_poly, y)
# Create a range of values
hpGrid = np.arange(np.min(X),np.max(X),10)
hp=hpGrid.reshape(-1,1)
# Transform to 4th degree
poly = PolynomialFeatures(degree=4)
hp_poly = poly.fit_transform(hp)

#Create a scatter plot
plt.scatter(X,y)
# Fit the prediction
ypred=linreg.predict(hp_poly)
plt.title("Poylnomial of degree 4")
fig2=plt.xlabel("Horsepower")
fig2=plt.ylabel("MPG")
# Draw the regression curve
plt.plot(hp,ypred,c="red")
plt.savefig('fig1.png', bbox_inches='tight')

(Figure: Polynomial of degree 4 – Python)

1.1c Fit a B-Spline – R Code

In the code below a B- Spline is fit to data. The B-spline requires the manual selection of knots

#Splines
library(splines)
# Fit a B-spline to the data. Select knots at 60,75,100,150
fit=lm(mpg~bs(horsepower,df=6,knots=c(60,75,100,150)),data=auto)
# Use the fitted regresion to predict
pred=predict(fit,newdata=list(horsepower=hpGrid),se=T)
# Create a scatter plot
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
     ylab="MPG", main="B-Spline with 4 knots")
#Draw lines with 2 Standard Errors on either side
lines(hpGrid,pred$fit,lwd=2)
lines(hpGrid,pred$fit+2*pred$se,lty="dashed")
lines(hpGrid,pred$fit-2*pred$se,lty="dashed")
abline(v=c(60,75,100,150),lty=2,col="darkgreen")

(Figure: B-Spline with 4 knots)

1.1d Fit a Natural Spline – R Code

Here a ‘Natural Spline’ is used for the fit. The Natural Spline extrapolates beyond the boundary knots, and the ends of the function are much more constrained than a regular spline or a global polynomial, where the ends can wag a lot more. Natural splines do not require the explicit selection of knots

# There is no need to select the knots here. There is a smoothing parameter which
# can be specified by the degrees of freedom 'df' parameter. The natural spline is then fit with ns()

fit2=lm(mpg~ns(horsepower,df=4),data=auto)
pred=predict(fit2,newdata=list(horsepower=hpGrid),se=T)
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
     ylab="MPG", main="Natural Splines")
lines(hpGrid,pred$fit,lwd=2)
lines(hpGrid,pred$fit+2*pred$se,lty="dashed")
lines(hpGrid,pred$fit-2*pred$se,lty="dashed")

(Figure: Natural Splines)

1.1.e Fit a Smoothing Spline – R code

Here a smoothing spline is used. Smoothing splines also do not require the explicit setting of knots. We can change the ‘degrees of freedom (df)’ parameter to get the best fit

# Smoothing spline has a smoothing parameter, the degrees of freedom
# This is too wiggly
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
     ylab="MPG", main="Smoothing Splines")

# Here df is set to 16. This has a lot of variance
fit=smooth.spline(auto$horsepower,auto$mpg,df=16)
lines(fit,col="red",lwd=2)

# We can use Cross Validation to allow the spline to pick the value of this smoothing parameter. We do not need to set the degrees of freedom 'df'
fit=smooth.spline(auto$horsepower,auto$mpg,cv=TRUE)
lines(fit,col="blue",lwd=2)

(Figure: Smoothing Splines)

1.1e Splines – Python

There isn’t as much treatment of splines in Python and sklearn. I did find the LSQUnivariateSpline and UnivariateSpline in scipy. The LSQUnivariateSpline requires the explicit setting of knots

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from scipy.interpolate import LSQUnivariateSpline
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
auto=autoDF2.dropna()
auto=auto[['horsepower','mpg']].sort_values('horsepower')

# Set the knots manually
knots=[65,75,100,150]
# Create an array for X & y
X=np.array(auto['horsepower'])
y=np.array(auto['mpg'])
# Fit a LSQunivariate spline
s = LSQUnivariateSpline(X,y,knots)

#Plot the spline
xs = np.linspace(40,230,1000)
ys = s(xs)
plt.scatter(X, y)
plt.plot(xs, ys)
plt.savefig('fig2.png', bbox_inches='tight')

(Figure: LSQUnivariateSpline fit – Python)

1.2 Generalized Additive Models (GAMs)

Generalized Additive Models (GAMs) are a really powerful ML tool.

y_{i} = \beta_{0} + f_{1}(x_{i1}) + f_{2}(x_{i2}) + .. +f_{p}(x_{ip}) + \epsilon_{i}

In GAMs we use a different function f_{j} for each of the variables. GAMs give a much better fit since we can choose any smooth function for each of the different features
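To make the additive idea concrete, here is a bare-bones backfitting sketch on made-up data, using a simple cubic polynomial as the univariate smoother for each f_{j}. This is purely illustrative; real GAM implementations, such as the R gam package used below, employ splines and much more careful fitting.

import numpy as np

np.random.seed(3)
# Made-up data: y is an additive function of two features plus noise
n = 300
X = np.random.uniform(-2, 2, size=(n, 2))
y = np.sin(2 * X[:, 0]) + X[:, 1]**2 + 0.2 * np.random.randn(n)

# Backfitting: repeatedly fit each f_j to the partial residuals left by the other feature
beta0 = y.mean()
f = np.zeros((n, 2))
for iteration in range(20):
    for j in range(2):
        partial_residual = y - beta0 - f[:, 1 - j]
        # A cubic polynomial as a simple univariate smoother for f_j
        coeffs = np.polyfit(X[:, j], partial_residual, deg=3)
        f[:, j] = np.polyval(coeffs, X[:, j])
        f[:, j] -= f[:, j].mean()   # keep each f_j centered

yhat = beta0 + f.sum(axis=1)
print("RMSE:", np.sqrt(np.mean((y - yhat)**2)))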

1.2a Generalized Additive Models (GAMs) – R Code

The plots below show the smoothing spline that is fit for each of the features horsepower, cylinder, displacement, year and acceleration. We can use any function, for example loess, a 4th order polynomial etc.

library(gam)
# Fit a smoothing spline for horsepower, cyliner, displacement and acceleration
gam=gam(mpg~s(horsepower,4)+s(cylinder,5)+s(displacement,4)+s(year,4)+s(acceleration,5),data=auto)
# Display the summary of the fit. This gives the significance of each of the parameters
# Also an ANOVA is given for each of the smooth terms
summary(gam)
## 
## Call: gam(formula = mpg ~ s(horsepower, 4) + s(cylinder, 5) + s(displacement, 
##     4) + s(year, 4) + s(acceleration, 5), data = auto)
## Deviance Residuals:
##     Min      1Q  Median      3Q     Max 
## -8.3190 -1.4436 -0.0261  1.2279 12.0873 
## 
## (Dispersion Parameter for gaussian family taken to be 6.9943)
## 
##     Null Deviance: 23818.99 on 391 degrees of freedom
## Residual Deviance: 2587.881 on 370 degrees of freedom
## AIC: 1898.282 
## 
## Number of Local Scoring Iterations: 3 
## 
## Anova for Parametric Effects
##                     Df  Sum Sq Mean Sq  F value    Pr(>F)    
## s(horsepower, 4)     1 15632.8 15632.8 2235.085 < 2.2e-16 ***
## s(cylinder, 5)       1   508.2   508.2   72.666 3.958e-16 ***
## s(displacement, 4)   1   374.3   374.3   53.514 1.606e-12 ***
## s(year, 4)           1  2263.2  2263.2  323.583 < 2.2e-16 ***
## s(acceleration, 5)   1   372.4   372.4   53.246 1.809e-12 ***
## Residuals          370  2587.9     7.0                       
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Anova for Nonparametric Effects
##                    Npar Df Npar F     Pr(F)    
## (Intercept)                                    
## s(horsepower, 4)         3 13.825 1.453e-08 ***
## s(cylinder, 5)           3 17.668 9.712e-11 ***
## s(displacement, 4)       3 44.573 < 2.2e-16 ***
## s(year, 4)               3 23.364 7.183e-14 ***
## s(acceleration, 5)       4  3.848  0.004453 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
par(mfrow=c(2,3))
plot(gam,se=TRUE)

fig5-1

1.2b Generalized Additive Models (GAMs) – Python Code

I did not find an equivalent of GAMs in sklearn in Python. There was an early prototype (2012) on Github; it looks like it is still a work in progress or has probably been abandoned.
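
For completeness, here is a minimal sketch of a GAM in Python. It assumes the third-party pygam package (pip install pygam), which is not part of sklearn and is not covered in the courses mentioned above; the LinearGAM class and the s() smooth terms used below come from that package.

import pandas as pd
from pygam import LinearGAM, s

# Load and clean the auto mpg data as in the earlier spline examples
autoDF = pd.read_csv("auto_mpg.csv", encoding="ISO-8859-1")
autoDF1 = autoDF[['mpg','cylinder','displacement','horsepower','acceleration','year']]
auto = autoDF1.apply(pd.to_numeric, errors='coerce').dropna()

X = auto[['horsepower','cylinder','displacement','year','acceleration']].values
y = auto['mpg'].values

# One smooth term s(i) per column of X, analogous to s(horsepower,4)+s(cylinder,5)+... in the R fit
gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + s(4)).fit(X, y)
# Print the effective degrees of freedom and significance of each smooth term
gam.summary()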

1.3 Tree based Machine Learning Models

Tree based ensemble methods like Bagging and Random Forests are built on the ‘bootstrapping’ technique. In bootstrapping, given a sample of size N, we create datasets of size N by sampling the original dataset with replacement. Machine Learning models are built on the different bootstrapped samples and their predictions are then averaged.
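
To make this concrete, here is a small illustrative sketch of bagging done by hand in Python: bootstrap samples are drawn with replacement, a shallow decision tree is fit on each sample, and the predictions are averaged. The choice of 2 features, tree depth and number of bootstrap samples is purely illustrative; the column names follow the Boston.csv used in the Random Forest code later in this section.

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Boston data as used elsewhere in this post; 2 features just for illustration
df = pd.read_csv("Boston.csv", encoding="ISO-8859-1")
X = df[['rooms', 'status']].values
y = df['medianValue'].values

np.random.seed(0)
N = len(X)
B = 50                                   # number of bootstrap samples
preds = np.zeros((B, N))
for b in range(B):
    # Sample N row indices with replacement - this is one bootstrap sample
    idx = np.random.choice(N, size=N, replace=True)
    tree = DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx])
    preds[b] = tree.predict(X)

# The bagged prediction is the average over the B bootstrapped trees
bagged = preds.mean(axis=0)
print("MSE of bagged trees: {:.2f}".format(np.mean((bagged - y) ** 2)))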

Decision Trees as seen above have the tendency to overfit. There are several techniques that help to avoid this namely a) Bagging b) Random Forests c) Boosting

Bagging, Random Forest and Gradient Boosting

Bagging: Bagging, or Bootstrap Aggregation, decreases the variance of predictions by building separate Decision Tree based ML models on the different bootstrap samples and then averaging these ML models

Random Forests: In bagging, every tree considers all the variables at each split while trying to minimize the error, so the trees from the different bootstrap samples end up highly correlated. Random Forests remove this shortcoming by considering only a random subset of the features at each split. Hence the splits chosen and the resulting trees are far less correlated, and when these ML models are averaged the performance is much better.

Boosting: Gradient Boosted Decision Trees also use an ensemble of trees, but they don’t build ML models from a random set of features at each step. Rather, small and simple trees are built and each successive tree tries to minimize the error left by the earlier trees.

Out of Bag (OOB) Error: In Random Forest and Gradient Boosting, for each bootstrap sample taken from the dataset there will be rows which are left out. These are known as Out of Bag (OOB) samples. The error of the predictions on these OOB samples is known as the OOB error.
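
As a quick check of the OOB idea (a sketch, not what randomForest or gbm do internally), the rows never drawn into a single bootstrap sample can be identified as below. On average this is about (1-1/N)^{N} \approx 1/e, i.e. roughly 37% of the rows end up Out of Bag.

import numpy as np

np.random.seed(0)
N = 506                                           # number of rows, e.g. the Boston data
idx = np.random.choice(N, size=N, replace=True)   # one bootstrap sample
oob = np.setdiff1d(np.arange(N), idx)             # rows never drawn = Out of Bag rows
print("OOB fraction: {:.3f}".format(len(oob) / float(N)))   # ~ 0.37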

1.31a Decision Trees – R Code

The code below creates a Decision tree with the cancer training data. The summary of the fit is output. Based on the ML model, the predict function is used on test data and a confusion matrix is output.

# Read the cancer data
library(tree)
library(caret)
library(e1071)
cancer <- read.csv("cancer.csv",stringsAsFactors = FALSE)
cancer <- cancer[,2:32]
cancer$target <- as.factor(cancer$target)
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Create Decision Tree
cancerStatus=tree(target~.,train)
summary(cancerStatus)
## 
## Classification tree:
## tree(formula = target ~ ., data = train)
## Variables actually used in tree construction:
## [1] "worst.perimeter"      "worst.concave.points" "area.error"          
## [4] "worst.texture"        "mean.texture"         "mean.concave.points" 
## Number of terminal nodes:  9 
## Residual mean deviance:  0.1218 = 50.8 / 417 
## Misclassification error rate: 0.02347 = 10 / 426
pred <- predict(cancerStatus,newdata=test,type="class")
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  0  1
##          0 49  7
##          1  8 78
##                                           
##                Accuracy : 0.8944          
##                  95% CI : (0.8318, 0.9397)
##     No Information Rate : 0.5986          
##     P-Value [Acc > NIR] : 4.641e-15       
##                                           
##                   Kappa : 0.7795          
##  Mcnemar's Test P-Value : 1               
##                                           
##             Sensitivity : 0.8596          
##             Specificity : 0.9176          
##          Pos Pred Value : 0.8750          
##          Neg Pred Value : 0.9070          
##              Prevalence : 0.4014          
##          Detection Rate : 0.3451          
##    Detection Prevalence : 0.3944          
##       Balanced Accuracy : 0.8886          
##                                           
##        'Positive' Class : 0               
## 
# Plot decision tree with labels
plot(cancerStatus)
text(cancerStatus,pretty=0)

fig6-1

1.31b Decision Trees – Cross Validation – R Code

We can also perform a Cross Validation on the data to identify the Decision Tree which will give the minimum deviance.

library(tree)
cancer <- read.csv("cancer.csv",stringsAsFactors = FALSE)
cancer <- cancer[,2:32]
cancer$target <- as.factor(cancer$target)
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Create Decision Tree
cancerStatus=tree(target~.,train)

# Execute 10 fold cross validation
cvCancer=cv.tree(cancerStatus)
plot(cvCancer)

fig7-1

# Plot the 
plot(cvCancer$size,cvCancer$dev,type='b')

fig1

prunedCancer=prune.tree(cancerStatus,best=4)
plot(prunedCancer)
text(prunedCancer,pretty=0)

fig2

pred <- predict(prunedCancer,newdata=test,type="class")
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  0  1
##          0 50  7
##          1  7 78
##                                          
##                Accuracy : 0.9014         
##                  95% CI : (0.8401, 0.945)
##     No Information Rate : 0.5986         
##     P-Value [Acc > NIR] : 7.988e-16      
##                                          
##                   Kappa : 0.7948         
##  Mcnemar's Test P-Value : 1              
##                                          
##             Sensitivity : 0.8772         
##             Specificity : 0.9176         
##          Pos Pred Value : 0.8772         
##          Neg Pred Value : 0.9176         
##              Prevalence : 0.4014         
##          Detection Rate : 0.3521         
##    Detection Prevalence : 0.4014         
##       Balanced Accuracy : 0.8974         
##                                          
##        'Positive' Class : 0              
## 

1.31c Decision Trees – Python Code

Below is the Python code for creating Decision Trees. The accuracy, precision, recall and F1 score are computed on the test data set.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn import tree
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
import graphviz 

cancer = load_breast_cancer()
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)

X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
                                                   random_state = 0)
clf = DecisionTreeClassifier().fit(X_train, y_train)

print('Accuracy of Decision Tree classifier on training set: {:.2f}'
     .format(clf.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
     .format(clf.score(X_test, y_test)))

y_predicted=clf.predict(X_test)
confusion = confusion_matrix(y_test, y_predicted)
print('Accuracy: {:.2f}'.format(accuracy_score(y_test, y_predicted)))
print('Precision: {:.2f}'.format(precision_score(y_test, y_predicted)))
print('Recall: {:.2f}'.format(recall_score(y_test, y_predicted)))
print('F1: {:.2f}'.format(f1_score(y_test, y_predicted)))

# Plot the Decision Tree
clf = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
dot_data = tree.export_graphviz(clf, out_file=None, 
                         feature_names=cancer.feature_names,  
                         class_names=cancer.target_names,  
                         filled=True, rounded=True,  
                         special_characters=True)  
graph = graphviz.Source(dot_data)  
graph
## Accuracy of Decision Tree classifier on training set: 1.00
## Accuracy of Decision Tree classifier on test set: 0.87
## Accuracy: 0.87
## Precision: 0.97
## Recall: 0.82
## F1: 0.89

tree

1.31d Decision Trees – Cross Validation – Python Code

In the code below 10-fold cross validation is performed for different depths of the tree and the mean accuracy is computed. The accuracy seems to plateau when the depth is 8, but it is seen to increase again from 10 to 12. More analysis needs to be done here


import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
from sklearn.model_selection import KFold
def computeCVAccuracy(X,y,folds):
    foldAcc=[]
    depth=[1,2,3,4,5,6,7,8,9,10,11,12]
    for i in depth:
        # Reset the per-fold accuracies for this depth
        accuracy=[]
        kf = KFold(n_splits=folds)
        for train_index, test_index in kf.split(X):
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]
            clf = DecisionTreeClassifier(max_depth = i).fit(X_train, y_train)
            score=clf.score(X_test, y_test)
            accuracy.append(score)
        # Mean accuracy across the folds for depth 'i'
        foldAcc.append(np.mean(accuracy))
    return(foldAcc)


cvAccuracy=computeCVAccuracy(pd.DataFrame(X_cancer),pd.Series(y_cancer),folds=10)

df1=pd.DataFrame(cvAccuracy)
df1.columns=['cvAccuracy']
# Index the accuracies by the tree depth they correspond to
df1.index=[1,2,3,4,5,6,7,8,9,10,11,12]
df=df1
df.plot()
plt.title("Decision Tree - 10-fold Cross Validation Accuracy vs Depth of tree")
plt.xlabel("Depth of tree")
plt.ylabel("Accuracy")
plt.savefig('fig3.png', bbox_inches='tight')

fig3

1.4a Random Forest – R code

A Random Forest is fit using the Boston data. The summary shows that 4 variables were randomly chosen at each split and the resulting ML model explains 88.72% of the variance (computed on the Out of Bag samples). The variable importance is also plotted. It can be seen that ‘rooms’ and ‘status’ are the most influential features in the model

library(randomForest)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL

# Select specific columns
Boston <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",                          "distances","highways","tax","teacherRatio","color",
                               "status","medianValue")

# Fit a Random Forest on the Boston training data
rfBoston=randomForest(medianValue~.,data=Boston)
# Display the summary of the fit. It can be seen that the mean of squared residuals is 9.52
# and the percentage variance explained is 88.72%. About 4 variables were tried at each
# split for a forest of 500 trees.
# The MSE and percent variance explained are computed on the Out of Bag samples
rfBoston
## 
## Call:
##  randomForest(formula = medianValue ~ ., data = Boston) 
##                Type of random forest: regression
##                      Number of trees: 500
## No. of variables tried at each split: 4
## 
##           Mean of squared residuals: 9.521672
##                     % Var explained: 88.72
#List and plot the variable importances
importance(rfBoston)
##              IncNodePurity
## crimeRate        2602.1550
## zone              258.8057
## indus            2599.6635
## charles           240.2879
## nox              2748.8485
## rooms           12011.6178
## age              1083.3242
## distances        2432.8962
## highways          393.5599
## tax              1348.6987
## teacherRatio     2841.5151
## color             731.4387
## status          12735.4046
varImpPlot(rfBoston)

fig8-1

1.4b Random Forest-OOB and Cross Validation Error – R code

The figure below shows the OOB error and the test error vs the ‘mtry’ parameter. Here mtry indicates the number of random features that are considered at each split. The lowest test error occurs when mtry = 8

library(randomForest)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL

# Select specific columns
Boston <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",                          "distances","highways","tax","teacherRatio","color",
                               "status","medianValue")
# Split as training and tst sets
train_idx <- trainTestSplit(Boston,trainPercent=75,seed=5)
train <- Boston[train_idx, ]
test <- Boston[-train_idx, ]

#Initialize OOD and testError
oobError <- NULL
testError <- NULL
# In the code below the number of variables to consider at each split is increased
# from 1 - 13(max features) and the OOB error and the MSE is computed
for(i in 1:13){
    fitRF=randomForest(medianValue~.,data=train,mtry=i,ntree=400)
    oobError[i] <-fitRF$mse[400]
    pred <- predict(fitRF,newdata=test)
    testError[i] <- mean((pred-test$medianValue)^2)
}

# We can see the OOB and Test Error. It can be seen that the Random Forest performs
# best with the lowest MSE at mtry=6
matplot(1:13,cbind(testError,oobError),pch=19,col=c("red","blue"),
        type="b",xlab="mtry(no of variables at each split)", ylab="Mean Squared Error",
        main="Random Forest - OOB and Test Error")
legend("topright",legend=c("Test","OOB"),pch=19,col=c("red","blue"))

fig9-1

1.4c Random Forest – Python code

The Python code for Random Forest regression is shown below. The training and test scores are computed. The variable importances show that ‘rooms’ and ‘status’ are the most influential of the variables

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")

X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
       'teacherRatio','color','status']]
y=df['medianValue']

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)

regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)

print('R-squared score (training): {:.3f}'
     .format(regr.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'
     .format(regr.score(X_test, y_test)))

feature_names=['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
       'teacherRatio','color','status']
print(regr.feature_importances_)
plt.figure(figsize=(10,6),dpi=80)
c_features=X_train.shape[1]
plt.barh(np.arange(c_features),regr.feature_importances_)
plt.xlabel("Feature importance")
plt.ylabel("Feature name")

plt.yticks(np.arange(c_features), feature_names)
plt.tight_layout()

plt.savefig('fig4.png', bbox_inches='tight')
## R-squared score (training): 0.917
## R-squared score (test): 0.734
## [ 0.03437382  0.          0.00580335  0.          0.00731004  0.36461548
##   0.00638577  0.03432173  0.0041244   0.01732328  0.01074148  0.0012638
##   0.51373683]

fig4

1.4d Random Forest – Cross Validation and OOB Error – Python code

As with R the ‘max_features’ determines the random number of features the random forest will use at each split. The plot shows that when max_features=8 the MSE is lowest

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")

X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
       'teacherRatio','color','status']]
y=df['medianValue']

cvError=[]
oobError=[]
oobMSE=[]
for i in range(1,13):
    regr = RandomForestRegressor(max_depth=4, n_estimators=400,max_features=i,oob_score=True,random_state=0)
    mse= np.mean(cross_val_score(regr, X, y, cv=5,scoring = 'neg_mean_squared_error'))
    # Since this is neg_mean_squared_error I have inverted the sign to get MSE
    cvError.append(-mse)
    # Fit on all data to compute OOB error
    regr.fit(X, y)
    # Record the OOB error for each `max_features=i` setting
    oob = 1 - regr.oob_score_
    oobError.append(oob)
    # Get the Out of Bag prediction
    oobPred=regr.oob_prediction_ 
    # Compute the Mean Squared Error between OOB Prediction and target
    mseOOB=np.mean(np.square(oobPred-y))
    oobMSE.append(mseOOB)

# Plot the CV Error and OOB Error
# Set max_features
maxFeatures=np.arange(1,13) 
cvError=pd.DataFrame(cvError,index=maxFeatures)
oobMSE=pd.DataFrame(oobMSE,index=maxFeatures)
#Plot the CV Error and the OOB MSE against max_features
combined=pd.concat([cvError,oobMSE],axis=1)
combined.columns=['CV Error','OOB MSE']
fig8=combined.plot()
fig8=plt.title('Random forest - CV Error and OOB Error vs max_features')
fig8.figure.savefig('fig8.png', bbox_inches='tight')

#Plot the OOB Error vs max_features
plt.plot(range(1,13),oobError)
fig2=plt.title("Random Forest - OOB Error vs max_features (variable no of features)")
fig2=plt.xlabel("max_features (variable no of features)")
fig2=plt.ylabel("OOB Error")
fig2.figure.savefig('fig7.png', bbox_inches='tight')

fig8 fig7

1.5a Boosting – R code

Here a Gradient Boosted ML model is built with n.trees=5000, a learning rate (shrinkage) of 0.01 and an interaction depth of 4. The feature importance plot also shows that rooms and status are the 2 most important features. The MSE vs the number of trees plateaus around 2000 trees

library(gbm)
# Perform gradient boosting on the Boston data set. The distribution is gaussian since we
# are minimizing the MSE. The interaction depth specifies the number of splits
boostBoston=gbm(medianValue~.,data=train,distribution="gaussian",n.trees=5000,
                shrinkage=0.01,interaction.depth=4)
#The summary gives the variable importance. The 2 most significant variables are
# number of rooms and lower status
summary(boostBoston)

##                       var    rel.inf
## rooms               rooms 42.2267200
## status             status 27.3024671
## distances       distances  7.9447972
## crimeRate       crimeRate  5.0238827
## nox                   nox  4.0616548
## teacherRatio teacherRatio  3.1991999
## age                   age  2.7909772
## color               color  2.3436295
## tax                   tax  2.1386213
## charles           charles  1.3799109
## highways         highways  0.7644026
## indus               indus  0.7236082
## zone                 zone  0.1001287
# The plots below show how each variable relates to the median value of the home. As
# the number of rooms increases the median value increases, and as the percentage of
# lower status population increases the median value decreases
par(mfrow=c(1,2))
#Plot the relation between the top 2 features and the target
plot(boostBoston,i="rooms")
plot(boostBoston,i="status")

fig10-2

# Create a sequence of trees between 100-5000 incremented by 50
nTrees=seq(100,5000,by=50)
# Predict the values for the test data
pred <- predict(boostBoston,newdata=test,n.trees=nTrees)
# Compute the mean for each of the MSE for each of the number of trees 
boostError <- apply((pred-test$medianValue)^2,2,mean)
#Plot the MSE vs the number of trees
plot(nTrees,boostError,pch=19,col="blue",ylab="Mean Squared Error",
     main="Boosting Test Error")

fig10-3

1.5b Cross Validation Boosting – R code

Included below is a cross validation error vs the learning rate. The lowest error is when learning rate = 0.09

cvError <- NULL
s <- c(.001,0.01,0.03,0.05,0.07,0.09,0.1)
for(i in seq_along(s)){
    cvBoost=gbm(medianValue~.,data=train,distribution="gaussian",n.trees=5000,
                shrinkage=s[i],interaction.depth=4,cv.folds=5)
    cvError[i] <- mean(cvBoost$cv.error)
}

# Create a data frame for plotting
a <- rbind(s,cvError)
b <- as.data.frame(t(a))
# It can be seen that a shrinkage parameter of 0.05 gives the lowest CV Error
ggplot(b,aes(s,cvError)) + geom_point() + geom_line(color="blue") + 
    xlab("Shrinkage") + ylab("Cross Validation Error") +
    ggtitle("Gradient boosted trees - Cross Validation error vs Shrinkage")

fig11-1

1.5c Boosting – Python code

A Gradient Boosting ML model in Python is created below. The Rsquared score is computed on the training and test data.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")

X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
       'teacherRatio','color','status']]
y=df['medianValue']

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)

regr = GradientBoostingRegressor()
regr.fit(X_train, y_train)

print('R-squared score (training): {:.3f}'
     .format(regr.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'
     .format(regr.score(X_test, y_test)))
## R-squared score (training): 0.983
## R-squared score (test): 0.821

1.5d Cross Validation Boosting – Python code

The cross validation error is computed as the learning rate is varied. The minimum CV error occurs at a learning rate of around 0.04

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")

X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
       'teacherRatio','color','status']]
y=df['medianValue']

cvError=[]
learning_rate =[.001,0.01,0.03,0.05,0.07,0.09,0.1]
for lr in learning_rate:
    regr = GradientBoostingRegressor(max_depth=4, n_estimators=400,learning_rate  =lr,random_state=0)
    mse= np.mean(cross_val_score(regr, X, y, cv=10,scoring = 'neg_mean_squared_error'))
    # Since this is neg_mean_squared_error I have inverted the sign to get MSE
    cvError.append(-mse)
# Plot the CV error against the learning rate
plt.plot(learning_rate,cvError)
plt.title("Gradient Boosting - 10-fold CV - Mean Squared Error vs learning rate")
plt.xlabel("Learning rate")
plt.ylabel("Mean Squared Error")
plt.savefig('fig6.png', bbox_inches='tight')

fig6

Conclusion

This post covered Splines and Tree based ML models like Bagging, Random Forest and Boosting. Stay tuned for further updates.

You may also like

  1. Re-introducing cricketr! : An R package to analyze performances of cricketer
  2. Designing a Social Web Portal
  3. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
  4. My TEDx talk on the “Internet of Things”
  5. Singularity
  6. A closer look at “Robot Horse on a Trot” in Android

To see all posts see Index of posts

Practical Machine Learning with R and Python – Part 3


In this post ‘Practical Machine Learning with R and Python – Part 3’,  I discuss ‘Feature Selection’ methods. This post is a continuation of my 2 earlier posts

  1. Practical Machine Learning with R and Python – Part 1
  2. Practical Machine Learning with R and Python – Part 2

While applying Machine Learning techniques, the data set will usually include a large number of predictors for a target variable. It is quite likely that not all the predictors or feature variables will have an impact on the output. Hence it becomes necessary to choose only those features which influence the output variable, thus reducing to a smaller feature set on which to train the ML model. The techniques that are used are the following

  • Best fit
  • Forward fit
  • Backward fit
  • Ridge Regression or L2 regularization
  • Lasso or L1 regularization

This post includes the equivalent ML code in R and Python.

All these methods remove those features which do not sufficiently influence the output. As in my previous 2 posts on ‘Practical Machine Learning with R and Python’, this post is largely based on the topics in the following 2 MOOC courses
1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Online Stanford
2. Applied Machine Learning in Python, Prof Kevyn Collins-Thompson, University Of Michigan, Coursera

You can download this R Markdown file and the associated data from Github – Machine Learning-RandPython-Part3. 

The content of this post and much more is now available as a compact book  on Amazon in both formats – as Paperback ($9.99) and a Kindle version($6.99/Rs449/). see ‘Practical Machine Learning with R and Python – Machine Learning in stereo

1.1 Best Fit

For a dataset with features f1,f2,f3…fn, the ‘Best fit’ approach chooses all possible combinations of features and creates a separate ML model for each combination. The best fit algorithm then uses some filtering criterion based on Adj Rsquared, Cp, BIC or AIC to pick out the best model among all the models.

Since the Best Fit approach searches the entire solution space it quickly becomes computationally infeasible. The number of models that have to be searched increases exponentially as the number of predictors increases. For ‘p’ predictors a total of 2^{p} ML models have to be searched. This can be shown as follows

There are \binom{n}{1} ways to choose single feature ML models among ‘n’ features, \binom{n}{2} ways to choose 2 feature models and so on. The total number of models in Best Fit, including the null model, is therefore
1+\binom{n}{1} + \binom{n}{2} +... + \binom{n}{n}
From the Binomial theorem we have
(1+x)^{n} = 1+\binom{n}{1}x + \binom{n}{2}x^{2} +... + \binom{n}{n}x^{n}
Setting x=1 in the above gives
2^{n} = 1+\binom{n}{1} + \binom{n}{2} +... + \binom{n}{n}

Hence there are 2^{n} models to search amongst in Best Fit. For 10 features this is 2^{10} or ~1000 models, and for 40 features this becomes 2^{40}, which is almost 1 trillion. Datasets often have 1000 or even 100,000 features, and Best Fit then becomes computationally infeasible.
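
The count above is easy to verify with a couple of lines of Python. This is just a counting sketch using itertools, not the actual model search:

from itertools import combinations

n = 13   # number of features, as in the Boston data used below
# All subsets of size 0..n (the null model plus every possible feature combination)
nModels = sum(1 for k in range(n + 1) for _ in combinations(range(n), k))
print(nModels)   # 8192 = 2^13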

Anyway, I have included the Best Fit approach as I use the Boston dataset, which is available in both the MASS package in R and Sklearn in Python, and it has only 13 features. Even this small feature set takes a bit of time since Best Fit needs to search among ~2^{13}= 8192 models

Initially I perform a simple Linear Regression fit to estimate which features are statistically insignificant. Looking at the p-values of the features, it can be seen that the ‘indus’ and ‘age’ features have high p-values and are not significant

1.1a Linear Regression – R code

source('RFunctions-1.R')
#Read the Boston crime data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
              "distances","highways","tax","teacherRatio","color","status","cost")
# Select specific columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                            "distances","highways","tax","teacherRatio","color","status","cost")
dim(df1)
## [1] 506  14
# Linear Regression fit
fit <- lm(cost~. ,data=df1)
summary(fit)
## 
## Call:
## lm(formula = cost ~ ., data = df1)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -15.595  -2.730  -0.518   1.777  26.199 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   3.646e+01  5.103e+00   7.144 3.28e-12 ***
## crimeRate    -1.080e-01  3.286e-02  -3.287 0.001087 ** 
## zone          4.642e-02  1.373e-02   3.382 0.000778 ***
## indus         2.056e-02  6.150e-02   0.334 0.738288    
## charles       2.687e+00  8.616e-01   3.118 0.001925 ** 
## nox          -1.777e+01  3.820e+00  -4.651 4.25e-06 ***
## rooms         3.810e+00  4.179e-01   9.116  < 2e-16 ***
## age           6.922e-04  1.321e-02   0.052 0.958229    
## distances    -1.476e+00  1.995e-01  -7.398 6.01e-13 ***
## highways      3.060e-01  6.635e-02   4.613 5.07e-06 ***
## tax          -1.233e-02  3.760e-03  -3.280 0.001112 ** 
## teacherRatio -9.527e-01  1.308e-01  -7.283 1.31e-12 ***
## color         9.312e-03  2.686e-03   3.467 0.000573 ***
## status       -5.248e-01  5.072e-02 -10.347  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4.745 on 492 degrees of freedom
## Multiple R-squared:  0.7406, Adjusted R-squared:  0.7338 
## F-statistic: 108.1 on 13 and 492 DF,  p-value: < 2.2e-16

Next we apply the different feature selection models to automatically remove features that are not significant below

1.1a Best Fit – R code

The Best Fit requires the ‘leaps’ R package

library(leaps)
source('RFunctions-1.R')
#Read the Boston crime data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
              "distances","highways","tax","teacherRatio","color","status","cost")
# Select specific columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                            "distances","highways","tax","teacherRatio","color","status","cost")

# Perform a best fit
bestFit=regsubsets(cost~.,df1,nvmax=13)

# Generate a summary of the fit
bfSummary=summary(bestFit)

# Plot the Residual Sum of Squares vs number of variables 
plot(bfSummary$rss,xlab="Number of Variables",ylab="RSS",type="l",main="Best fit RSS vs No of features")
# Get the index of the minimum value
a=which.min(bfSummary$rss)
# Mark this in red
points(a,bfSummary$rss[a],col="red",cex=2,pch=20)

The plot below shows that the lowest RSS occurs with all 13 features included. Notice that there is no significant change in RSS from 11 features onward.

# Plot the CP statistic vs Number of variables
plot(bfSummary$cp,xlab="Number of Variables",ylab="Cp",type='l',main="Best fit Cp vs No of features")
# Find the lowest CP value
b=which.min(bfSummary$cp)
# Mark this in red
points(b,bfSummary$cp[b],col="red",cex=2,pch=20)

Based on Cp metric the best fit occurs at 11 features as seen below. The values of the coefficients are also included below

# Display the set of features which provide the best fit
coef(bestFit,b)
##   (Intercept)     crimeRate          zone       charles           nox 
##  36.341145004  -0.108413345   0.045844929   2.718716303 -17.376023429 
##         rooms     distances      highways           tax  teacherRatio 
##   3.801578840  -1.492711460   0.299608454  -0.011777973  -0.946524570 
##         color        status 
##   0.009290845  -0.522553457
#  Plot the BIC value
plot(bfSummary$bic,xlab="Number of Variables",ylab="BIC",type='l',main="Best fit BIC vs No of Features")
# Find and mark the min value
c=which.min(bfSummary$bic)
points(c,bfSummary$bic[c],col="red",cex=2,pch=20)

# R has some other good plots for best fit
plot(bestFit,scale="r2",main="Rsquared vs No Features")

R also has some really nice visualizations for regsubsets. The plot below shows the Rsquared for different sets of predictor variables. It can be seen that when Rsquared is around 0.74, indus, charles and age have not been included.

plot(bestFit,scale="Cp",main="Cp vs NoFeatures")

The Cp plot below shows that indus, charles and age are not included in the Best fit

plot(bestFit,scale="bic",main="BIC vs Features")

1.1b Best fit (Exhaustive Search ) – Python code

The Python package for performing a Best Fit is the Exhaustive Feature Selector (EFS) from mlxtend.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS

# Read the Boston crime data
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")

#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
# Set X and y 
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']

# Perform an Exhaustive Search. The EFS and SFS classes use 'neg_mean_squared_error'. The 'mean_squared_error' scoring option seems to have been deprecated. I think this is just the MSE with a negative sign.
lr = LinearRegression()
efs1 = EFS(lr, 
           min_features=1,
           max_features=13,
           scoring='neg_mean_squared_error',
           print_progress=True,
           cv=5)


# Create a efs fit
efs1 = efs1.fit(X.as_matrix(), y.as_matrix())

print('Best negative mean squared error: %.2f' % efs1.best_score_)
## Print the IDX of the best features 
print('Best subset:', efs1.best_idx_)
Features: 8191/8191Best negative mean squared error: -28.92
## ('Best subset:', (0, 1, 4, 6, 7, 8, 9, 10, 11, 12))

The indices for the best subset are shown above.

1.2 Forward fit

Forward fit is a greedy algorithm that tries to optimize the set of features selected by minimizing the selection criterion (adj Rsquared, Cp, AIC or BIC) at every step. For a dataset with features f1,f2,f3…fn, the forward fit starts with the NULL set. It then picks the single-feature ML model, from the n features, which has the highest adj Rsquared, or minimum Cp, BIC or some such criterion. After picking the 1 feature which best satisfies the criterion, the next feature from the remaining n-1 features is chosen. Once the 2 feature model which best satisfies the selection criterion is chosen, another feature from the remaining n-2 features is added, and so on. Forward fit is a sub-optimal algorithm; there is no guarantee that the final list of features chosen will be the best among the lot. However, the computation required is n + (n-1) + (n-2) + .. + 1 = n(n+1)/2, which is of the order of n^{2}, so though forward fit is a sub-optimal solution it is far more computationally efficient than best fit
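
Below is a bare-bones sketch of this greedy forward search in Python, using plain linear regression and 5-fold CV MSE as the selection criterion. It is only a simplified stand-in for leaps/SFS, and the renamed Boston columns follow the code later in this post.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("Boston.csv", encoding="ISO-8859-1")
df.columns = ["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X = df.drop(["no","cost"], axis=1)
y = df["cost"]

selected, remaining = [], list(X.columns)
lr = LinearRegression()
# At each step add the single feature that gives the lowest 5-fold CV MSE
while remaining:
    scores = {f: -np.mean(cross_val_score(lr, X[selected + [f]], y, cv=5,
                                          scoring="neg_mean_squared_error"))
              for f in remaining}
    best = min(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)
    print("{} features, CV MSE = {:.2f}".format(len(selected), scores[best]))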

1.2a Forward fit – R code

Forward fit in R determines that 11 features are required for the best fit. The features are shown below

library(leaps)
# Read the data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
              "distances","highways","tax","teacherRatio","color","status","cost")

# Select columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                     "distances","highways","tax","teacherRatio","color","status","cost")

#Split as training and test 
train_idx <- trainTestSplit(df1,trainPercent=75,seed=5)
train <- df1[train_idx, ]
test <- df1[-train_idx, ]

# Find the best forward fit
fitFwd=regsubsets(cost~.,data=train,nvmax=13,method="forward")

# Compute the MSE
valErrors=rep(NA,13)
test.mat=model.matrix(cost~.,data=test)
for(i in 1:13){
    coefi=coef(fitFwd,id=i)
    pred=test.mat[,names(coefi)]%*%coefi
    valErrors[i]=mean((test$cost-pred)^2)
}

# Plot the Residual Sum of Squares
plot(valErrors,xlab="Number of Variables",ylab="Validation Error",type="l",main="Forward fit RSS vs No of features")
# Gives the index of the minimum value
a<-which.min(valErrors)
print(a)
## [1] 11
# Highlight the smallest value
points(a,valErrors[a],col="blue",cex=2,pch=20)

Forward fit in R selects 11 predictors as the best ML model to predict the ‘cost’ output variable. The coefficients of the fitted model are printed below

#Print the coefficients of the fit
coefi=coef(fitFwd,id=i)
coefi
##   (Intercept)     crimeRate          zone         indus       charles 
##  2.397179e+01 -1.026463e-01  3.118923e-02  1.154235e-04  3.512922e+00 
##           nox         rooms           age     distances      highways 
## -1.511123e+01  4.945078e+00 -1.513220e-02 -1.307017e+00  2.712534e-01 
##           tax  teacherRatio         color        status 
## -1.330709e-02 -8.182683e-01  1.143835e-02 -3.750928e-01

1.2b Forward fit with Cross Validation – R code

The Python SFS package includes N fold Cross Validation errors for forward and backward fit, so I decided to add this to the R code as well. This is not available in the ‘leaps’ R package, however the implementation is quite simple. Another implementation is also available in the Statistical Learning course by Prof Trevor Hastie & Prof Robert Tibshirani (Online Stanford).

library(dplyr)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
              "distances","highways","tax","teacherRatio","color","status","cost")

# Select columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                     "distances","highways","tax","teacherRatio","color","status","cost")

set.seed(6)
# Set max number of features
nvmax<-13
cvError <- NULL
# Loop through each features
for(i in 1:nvmax){
    # Set no of folds
    noFolds=5
    # Create the rows which fall into different folds from 1..noFolds
    folds = sample(1:noFolds, nrow(df1), replace=TRUE) 
    cv<-0
    # Loop through the folds
    for(j in 1:noFolds){
        # The training is all rows for which the row is != j (k-1 folds -> training)
        train <- df1[folds!=j,]
        # The rows which have j as the index become the test set
        test <- df1[folds==j,]
        # Create a forward fitting model for this
        fitFwd=regsubsets(cost~.,data=train,nvmax=13,method="forward")
        # Select the number of features and get the feature coefficients
        coefi=coef(fitFwd,id=i)
        #Get the value of the test data
        test.mat=model.matrix(cost~.,data=test)
        # Multiply the test data with the fitted coefficients to get the predicted value
        # pred = b0 + b1x1+b2x2... b13x13
        pred=test.mat[,names(coefi)]%*%coefi
        # Compute mean squared error
        rss=mean((test$cost - pred)^2)
        # Add all the Cross Validation errors
        cv=cv+rss
    }
    # Compute the average of MSE for K folds for number of features 'i'
    cvError[i]=cv/noFolds
}
a <- seq(1,13)
d <- as.data.frame(t(rbind(a,cvError)))
names(d) <- c("Features","CVError")
#Plot the CV Error vs No of Features
ggplot(d,aes(x=Features,y=CVError),color="blue") + geom_point() + geom_line(color="blue") +
    xlab("No of features") + ylab("Cross Validation Error") +
    ggtitle("Forward Selection - Cross Valdation Error vs No of Features")

Forward fit with 5 fold cross validation indicates that all 13 features are required

# This gives the index of the minimum value
a=which.min(cvError)
print(a)
## [1] 13
#Print the 13 coefficients of these features
coefi=coef(fitFwd,id=a)
coefi
##   (Intercept)     crimeRate          zone         indus       charles 
##  36.650645380  -0.107980979   0.056237669   0.027016678   4.270631466 
##           nox         rooms           age     distances      highways 
## -19.000715500   3.714720418   0.019952654  -1.472533973   0.326758004 
##           tax  teacherRatio         color        status 
##  -0.011380750  -0.972862622   0.009549938  -0.582159093

1.2c Forward fit – Python code

The Forward Fit in Python uses the Sequential Feature Selector (SFS) from mlxtend (https://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/)

Note: The Cross Validation error returned by SFS is negative, because it uses sklearn’s ‘neg_mean_squared_error’ scoring. The earlier ‘mean_squared_error’ scoring option seems to have been deprecated. I have taken the negative of this neg_mean_squared_error, which gives the mean squared error.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression


df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()
# Create a forward fit model
sfs = SFS(lr, 
          k_features=(1,13), 
          forward=True, # Forward fit
          floating=False, 
          scoring='neg_mean_squared_error',
          cv=5)

# Fit this on the data
sfs = sfs.fit(X.as_matrix(), y.as_matrix())
# Get all the details of the forward fits
a=sfs.get_metric_dict()
n=[]
o=[]

# Compute the mean cross validation scores
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores']))  
m=np.arange(1,13)
# Get the index of the minimum CV score

# Plot the CV scores vs the number of features
fig1=plt.plot(m,n)
fig1=plt.title('Mean CV Scores vs No of features')
fig1.figure.savefig('fig1.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sfs.get_metric_dict(confidence_interval=0.90)).T)

# argmin is 0 based while the metric dict 'a' is keyed by the number of features,
# so the best number of features is argmin + 1
idx = np.argmin(n) + 1
print("No of features=",idx)
#Get the feature indices for the best forward fit and convert to list
b=list(a[idx]['feature_idx'])
print(b)

# Index the column names. 
# Features from forward fit
print("Features selected in forward fit")
print(X.columns[b])
##    avg_score ci_bound                                          cv_scores  \
## 1   -42.6185  19.0465  [-23.5582499971, -41.8215743748, -73.993608929...   
## 2   -36.0651  16.3184  [-18.002498199, -40.1507894517, -56.5286659068...   
## 3   -34.1001    20.87  [-9.43012884381, -25.9584955394, -36.184188174...   
## 4   -33.7681  20.1638  [-8.86076528781, -28.650217633, -35.7246353855...   
## 5   -33.6392  20.5271  [-8.90807628524, -28.0684679108, -35.827463022...   
## 6   -33.6276  19.0859  [-9.549485942, -30.9724602876, -32.6689523347,...   
## 7   -32.4082  19.1455  [-10.0177149635, -28.3780298492, -30.926917231...   
## 8   -32.3697   18.533  [-11.1431684243, -27.5765510172, -31.168994094...   
## 9   -32.4016  21.5561  [-10.8972555995, -25.739780653, -30.1837430353...   
## 10  -32.8504  22.6508  [-12.3909282079, -22.1533250755, -33.385407342...   
## 11  -34.1065  24.7019  [-12.6429253721, -22.1676650245, -33.956999528...   
## 12  -35.5814   25.693  [-12.7303397453, -25.0145323483, -34.211898373...   
## 13  -37.1318  23.2657  [-12.4603005692, -26.0486211062, -33.074137979...   
## 
##                                    feature_idx  std_dev  std_err  
## 1                                        (12,)  18.9042  9.45212  
## 2                                     (10, 12)  16.1965  8.09826  
## 3                                  (10, 12, 5)  20.7142  10.3571  
## 4                               (10, 3, 12, 5)  20.0132  10.0066  
## 5                            (0, 10, 3, 12, 5)  20.3738  10.1869  
## 6                         (0, 3, 5, 7, 10, 12)  18.9433  9.47167  
## 7                      (0, 2, 3, 5, 7, 10, 12)  19.0026  9.50128  
## 8                   (0, 1, 2, 3, 5, 7, 10, 12)  18.3946  9.19731  
## 9               (0, 1, 2, 3, 5, 7, 10, 11, 12)  21.3952  10.6976  
## 10           (0, 1, 2, 3, 4, 5, 7, 10, 11, 12)  22.4816  11.2408  
## 11        (0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12)  24.5175  12.2587  
## 12     (0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12)  25.5012  12.7506  
## 13  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)  23.0919   11.546  
## No of features= 7
## [0, 2, 3, 5, 7, 10, 12]
## #################################################################################
## Features selected in forward fit
## Index([u'crimeRate', u'indus', u'chasRiver', u'rooms', u'distances',
##        u'teacherRatio', u'status'],
##       dtype='object')

The table above shows the average score, 5 fold CV errors, the features included at every step, the std. deviation and the std. error

The above plot indicates that 8 features provide the lowest Mean CV error

1.3 Backward Fit

Backward fit belongs to the class of greedy algorithms; it tries to optimize the feature set by dropping one feature at every stage for a given criterion of Adj RSquared, Cp, BIC or AIC. For a dataset with features f1,f2,f3…fn, the backward fit starts with all the features f1,f2.. fn. It then picks the ML model with n-1 features by dropping the feature f_{j} whose removal degrades the selection criterion (adj Rsquared, Cp, BIC or some such) the least. At every step 1 feature is dropped. There is no guarantee that the final list of features chosen will be the best among the lot. The computation required is n + (n-1) + (n-2) + .. + 1 = n(n+1)/2, which is of the order of n^{2}. Though backward fit is a sub-optimal solution it is far more computationally efficient than best fit
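
The same greedy idea in reverse can be sketched as below (again only a simplified stand-in for leaps/SFS using 5-fold CV MSE): at each step the feature whose removal leaves the lowest CV error is dropped.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("Boston.csv", encoding="ISO-8859-1")
df.columns = ["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
y = df["cost"]
selected = [c for c in df.columns if c not in ("no", "cost")]
lr = LinearRegression()

# Start with all 13 features and drop one feature at a time
while len(selected) > 1:
    scores = {f: -np.mean(cross_val_score(lr, df[[c for c in selected if c != f]], y,
                                          cv=5, scoring="neg_mean_squared_error"))
              for f in selected}
    worst = min(scores, key=scores.get)   # dropping this feature hurts the least
    selected.remove(worst)
    print("{} features, CV MSE = {:.2f}".format(len(selected), scores[worst]))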

1.3a Backward fit – R code

library(dplyr)
# Read the data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
              "distances","highways","tax","teacherRatio","color","status","cost")

# Select columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                     "distances","highways","tax","teacherRatio","color","status","cost")

set.seed(6)
# Set max number of features
nvmax<-13
cvError <- NULL
# Loop through each features
for(i in 1:nvmax){
    # Set no of folds
    noFolds=5
    # Create the rows which fall into different folds from 1..noFolds
    folds = sample(1:noFolds, nrow(df1), replace=TRUE) 
    cv<-0
    for(j in 1:noFolds){
        # The training is all rows for which the row is != j 
        train <- df1[folds!=j,]
        # The rows which have j as the index become the test set
        test <- df1[folds==j,]
        # Create a backward fitting model for this
        fitFwd=regsubsets(cost~.,data=train,nvmax=13,method="backward")
        # Select the number of features and get the feature coefficients
        coefi=coef(fitFwd,id=i)
        #Get the value of the test data
        test.mat=model.matrix(cost~.,data=test)
        # Multiply the test data with the fitted coefficients to get the predicted value
        # pred = b0 + b1x1+b2x2... b13x13
        pred=test.mat[,names(coefi)]%*%coefi
        # Compute mean squared error
        rss=mean((test$cost - pred)^2)
        # Add the Residual sum of square
        cv=cv+rss
    }
    # Compute the average of MSE for K folds for number of features 'i'
    cvError[i]=cv/noFolds
}
a <- seq(1,13)
d <- as.data.frame(t(rbind(a,cvError)))
names(d) <- c("Features","CVError")
# Plot the Cross Validation Error vs Number of features
ggplot(d,aes(x=Features,y=CVError),color="blue") + geom_point() + geom_line(color="blue") +
    xlab("No of features") + ylab("Cross Validation Error") +
    ggtitle("Backward Selection - Cross Valdation Error vs No of Features")

# This gives the index of the minimum value
a=which.min(cvError)
print(a)
## [1] 13
#Print the 13 coefficients of these features
coefi=coef(fitFwd,id=a)
coefi
##   (Intercept)     crimeRate          zone         indus       charles 
##  36.650645380  -0.107980979   0.056237669   0.027016678   4.270631466 
##           nox         rooms           age     distances      highways 
## -19.000715500   3.714720418   0.019952654  -1.472533973   0.326758004 
##           tax  teacherRatio         color        status 
##  -0.011380750  -0.972862622   0.009549938  -0.582159093

Backward selection in R also indicates that all 13 features, with the coefficients shown above, provide the best fit

1.3b Backward fit – Python code

The Backward Fit in Python uses the Sequential feature selection (SFS) package (SFS)(https://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/)

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression

# Read the data
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()

# Create the SFS model
sfs = SFS(lr, 
          k_features=(1,13), 
          forward=False, # Backward
          floating=False, 
          scoring='neg_mean_squared_error',
          cv=5)

# Fit the model
sfs = sfs.fit(X.as_matrix(), y.as_matrix())
a=sfs.get_metric_dict()
n=[]
o=[]

# Compute the mean of the validation scores
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores'])) 
m=np.arange(1,13)

# Plot the Validation scores vs number of features
fig2=plt.plot(m,n)
fig2=plt.title('Mean CV Scores vs No of features')
fig2.figure.savefig('fig2.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sfs.get_metric_dict(confidence_interval=0.90)).T)

# Get the number of features with the minimum cross validation error
# (argmin is 0 based while the metric dict 'a' is keyed by the number of features)
idx = np.argmin(n) + 1
print("No of features=",idx)
#Get the feature indices for the best backward fit and convert to list
b=list(a[idx]['feature_idx'])
# Index the column names. 
# Features from backward fit
print("Features selected in bacward fit")
print(X.columns[b])
##    avg_score ci_bound                                          cv_scores  \
## 1   -42.6185  19.0465  [-23.5582499971, -41.8215743748, -73.993608929...   
## 2   -36.0651  16.3184  [-18.002498199, -40.1507894517, -56.5286659068...   
## 3   -35.4992  13.9619  [-17.2329292677, -44.4178648308, -51.633177846...   
## 4    -33.463  12.4081  [-20.6415333292, -37.3247852146, -47.479302977...   
## 5   -33.1038  10.6156  [-20.2872309863, -34.6367078466, -45.931870352...   
## 6   -32.0638  10.0933  [-19.4463829372, -33.460638577, -42.726257249,...   
## 7   -30.7133  9.23881  [-19.4425181917, -31.1742902259, -40.531266671...   
## 8   -29.7432  9.84468  [-19.445277268, -30.0641187173, -40.2561247122...   
## 9   -29.0878  9.45027  [-19.3545569877, -30.094768669, -39.7506036377...   
## 10  -28.9225  9.39697  [-18.562171585, -29.968504938, -39.9586835965,...   
## 11  -29.4301  10.8831  [-18.3346152225, -30.3312847532, -45.065432793...   
## 12  -30.4589  11.1486  [-18.493389527, -35.0290639374, -45.1558231765...   
## 13  -37.1318  23.2657  [-12.4603005692, -26.0486211062, -33.074137979...   
## 
##                                    feature_idx  std_dev  std_err  
## 1                                        (12,)  18.9042  9.45212  
## 2                                     (10, 12)  16.1965  8.09826  
## 3                                  (10, 12, 7)  13.8576  6.92881  
## 4                               (12, 10, 4, 7)  12.3154  6.15772  
## 5                            (4, 7, 8, 10, 12)  10.5363  5.26816  
## 6                         (4, 7, 8, 9, 10, 12)  10.0179  5.00896  
## 7                      (1, 4, 7, 8, 9, 10, 12)  9.16981  4.58491  
## 8                  (1, 4, 7, 8, 9, 10, 11, 12)  9.77116  4.88558  
## 9               (0, 1, 4, 7, 8, 9, 10, 11, 12)  9.37969  4.68985  
## 10           (0, 1, 4, 6, 7, 8, 9, 10, 11, 12)   9.3268   4.6634  
## 11        (0, 1, 3, 4, 6, 7, 8, 9, 10, 11, 12)  10.8018  5.40092  
## 12     (0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12)  11.0653  5.53265  
## 13  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)  23.0919   11.546  
## No of features= 9
## Features selected in backward fit
## Index([u'crimeRate', u'zone', u'NO2', u'distances', u'idxHighways', u'taxRate',
##        u'teacherRatio', u'color', u'status'],
##       dtype='object')

The table above shows the average score, 5 fold CV errors, the features included at every step, the std. deviation and the std. error

Backward fit in Python indicates that 10 features provide the best fit

1.3c Sequential Floating Forward Selection (SFFS) – Python code

The Sequential Feature search also includes ‘floating’ variants, which can conditionally include or exclude features after they have been excluded or included at an earlier step. The SFFS can conditionally re-include features which were excluded in a previous step, if that results in a better fit. This option will tend to give a better solution than plain SFS. These variants are included below

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression


df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()

# Create the floating forward search
sffs = SFS(lr, 
          k_features=(1,13), 
          forward=True,  # Forward
          floating=True,  #Floating
          scoring='neg_mean_squared_error',
          cv=5)

# Fit a model
sffs = sffs.fit(X.as_matrix(), y.as_matrix())
a=sffs.get_metric_dict()
n=[]
o=[]
# Compute mean validation scores
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores'])) 
   
    
    
m=np.arange(1,13)


# Plot the cross validation score vs number of features
fig3=plt.plot(m,n)
fig3=plt.title('SFFS:Mean CV Scores vs No of features')
fig3.figure.savefig('fig3.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sffs.get_metric_dict(confidence_interval=0.90)).T)
# Get the number of features with the minimum CV score
# (argmin is 0 based while the metric dict 'a' is keyed by the number of features)
idx = np.argmin(n) + 1
print("No of features=",idx)
#Get the feature indices for the best forward floating fit and convert to list
b=list(a[idx]['feature_idx'])
print(b)

print("#################################################################################")
# Index the column names. 
# Features from forward fit
print("Features selected in forward fit")
print(X.columns[b])
##    avg_score ci_bound                                          cv_scores  \
## 1   -42.6185  19.0465  [-23.5582499971, -41.8215743748, -73.993608929...   
## 2   -36.0651  16.3184  [-18.002498199, -40.1507894517, -56.5286659068...   
## 3   -34.1001    20.87  [-9.43012884381, -25.9584955394, -36.184188174...   
## 4   -33.7681  20.1638  [-8.86076528781, -28.650217633, -35.7246353855...   
## 5   -33.6392  20.5271  [-8.90807628524, -28.0684679108, -35.827463022...   
## 6   -33.6276  19.0859  [-9.549485942, -30.9724602876, -32.6689523347,...   
## 7   -32.1834  12.1001  [-17.9491036167, -39.6479234651, -45.470227740...   
## 8   -32.0908  11.8179  [-17.4389015788, -41.2453629843, -44.247557798...   
## 9   -31.0671  10.1581  [-17.2689542913, -37.4379370429, -41.366372300...   
## 10  -28.9225  9.39697  [-18.562171585, -29.968504938, -39.9586835965,...   
## 11  -29.4301  10.8831  [-18.3346152225, -30.3312847532, -45.065432793...   
## 12  -30.4589  11.1486  [-18.493389527, -35.0290639374, -45.1558231765...   
## 13  -37.1318  23.2657  [-12.4603005692, -26.0486211062, -33.074137979...   
## 
##                                    feature_idx  std_dev  std_err  
## 1                                        (12,)  18.9042  9.45212  
## 2                                     (10, 12)  16.1965  8.09826  
## 3                                  (10, 12, 5)  20.7142  10.3571  
## 4                               (10, 3, 12, 5)  20.0132  10.0066  
## 5                            (0, 10, 3, 12, 5)  20.3738  10.1869  
## 6                         (0, 3, 5, 7, 10, 12)  18.9433  9.47167  
## 7                      (0, 1, 2, 3, 7, 10, 12)  12.0097  6.00487  
## 8                   (0, 1, 2, 3, 7, 8, 10, 12)  11.7297  5.86484  
## 9                (0, 1, 2, 3, 7, 8, 9, 10, 12)  10.0822  5.04111  
## 10           (0, 1, 4, 6, 7, 8, 9, 10, 11, 12)   9.3268   4.6634  
## 11        (0, 1, 3, 4, 6, 7, 8, 9, 10, 11, 12)  10.8018  5.40092  
## 12     (0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12)  11.0653  5.53265  
## 13  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)  23.0919   11.546  
## No of features= 9
## [0, 1, 2, 3, 7, 8, 9, 10, 12]
## #################################################################################
## Features selected in forward fit
## Index([u'crimeRate', u'zone', u'indus', u'chasRiver', u'distances',
##        u'idxHighways', u'taxRate', u'teacherRatio', u'status'],
##       dtype='object')

The table above shows the average score, the 5 fold CV errors, the features included at every step, the std. deviation and the std. error

SFFS provides the best fit with 10 predictors (the lowest CV error in the table above)

1.3d Sequential Floating Backward Selection (SFBS) – Python code

The SFBS is an extension of SBS (Sequential Backward Selection). Here, features that were excluded at an earlier stage can be conditionally re-included if the resulting feature set gives a better fit.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
from mlxtend.feature_selection import SequentialFeatureSelector as SFS


df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()

sffs = SFS(lr, 
          k_features=(1,13), 
          forward=False, # Backward
          floating=True, # Floating
          scoring='neg_mean_squared_error',
          cv=5)

sffs = sffs.fit(X.as_matrix(), y.as_matrix())
a=sffs.get_metric_dict()
n=[]
o=[]
# Compute the mean cross validation score
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores']))  
    
m=np.arange(1,13)

fig4=plt.plot(m,n)
fig4=plt.title('SFBS: Mean CV Scores vs No of features')
fig4.figure.savefig('fig4.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sffs.get_metric_dict(confidence_interval=0.90)).T)

# Get the index of the minimum CV score
idx = np.argmin(n)
print "No of features=",idx
#Get the features indices for the best backward floating fit and convert to list
b=list(a[idx]['feature_idx'])
print(b)

print("#################################################################################")
# Index the column names. 
# Features from forward fit
print("Features selected in backward floating fit")
print(X.columns[b])
##    avg_score ci_bound                                          cv_scores  \
## 1   -42.6185  19.0465  [-23.5582499971, -41.8215743748, -73.993608929...   
## 2   -36.0651  16.3184  [-18.002498199, -40.1507894517, -56.5286659068...   
## 3   -34.1001    20.87  [-9.43012884381, -25.9584955394, -36.184188174...   
## 4    -33.463  12.4081  [-20.6415333292, -37.3247852146, -47.479302977...   
## 5   -32.3699  11.2725  [-20.8771078371, -34.9825657934, -45.813447203...   
## 6   -31.6742  11.2458  [-20.3082500364, -33.2288990522, -45.535507868...   
## 7   -30.7133  9.23881  [-19.4425181917, -31.1742902259, -40.531266671...   
## 8   -29.7432  9.84468  [-19.445277268, -30.0641187173, -40.2561247122...   
## 9   -29.0878  9.45027  [-19.3545569877, -30.094768669, -39.7506036377...   
## 10  -28.9225  9.39697  [-18.562171585, -29.968504938, -39.9586835965,...   
## 11  -29.4301  10.8831  [-18.3346152225, -30.3312847532, -45.065432793...   
## 12  -30.4589  11.1486  [-18.493389527, -35.0290639374, -45.1558231765...   
## 13  -37.1318  23.2657  [-12.4603005692, -26.0486211062, -33.074137979...   
## 
##                                    feature_idx  std_dev  std_err  
## 1                                        (12,)  18.9042  9.45212  
## 2                                     (10, 12)  16.1965  8.09826  
## 3                                  (10, 12, 5)  20.7142  10.3571  
## 4                               (4, 10, 7, 12)  12.3154  6.15772  
## 5                            (12, 10, 4, 1, 7)  11.1883  5.59417  
## 6                        (4, 7, 8, 10, 11, 12)  11.1618  5.58088  
## 7                      (1, 4, 7, 8, 9, 10, 12)  9.16981  4.58491  
## 8                  (1, 4, 7, 8, 9, 10, 11, 12)  9.77116  4.88558  
## 9               (0, 1, 4, 7, 8, 9, 10, 11, 12)  9.37969  4.68985  
## 10           (0, 1, 4, 6, 7, 8, 9, 10, 11, 12)   9.3268   4.6634  
## 11        (0, 1, 3, 4, 6, 7, 8, 9, 10, 11, 12)  10.8018  5.40092  
## 12     (0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12)  11.0653  5.53265  
## 13  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)  23.0919   11.546  
## No of features= 9
## [0, 1, 4, 7, 8, 9, 10, 11, 12]
## #################################################################################
## Features selected in backward floating fit
## Index([u'crimeRate', u'zone', u'NO2', u'distances', u'idxHighways', u'taxRate',
##        u'teacherRatio', u'color', u'status'],
##       dtype='object')

The table above shows the average score, the 5 fold CV errors, the features included at every step, the std. deviation and the std. error

SFBS indicates that 10 features are needed for the best fit

1.4 Ridge regression

In Linear Regression the Residual Sum of Squares (RSS) is given as

RSS = \sum_{i=1}^{n} (y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij})^{2}
Ridge regularization minimizes \sum_{i=1}^{n} (y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij})^{2} + \lambda \sum_{j=1}^{p}\beta_{j}^{2}

where \lambda is the regularization or tuning parameter. Increasing \lambda increases the penalty on the coefficients, thus shrinking them. However, in Ridge Regression the coefficients of features that do not influence the target variable shrink closer to zero but never become exactly zero, except for very large values of \lambda.

Ridge regression in R requires the ‘glmnet’ package

1.4a Ridge Regression – R code

library(glmnet)
library(dplyr)
# Read the data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
#Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
              "distances","highways","tax","teacherRatio","color","status","cost")
# Select specific columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                            "distances","highways","tax","teacherRatio","color","status","cost")

# Set X and y as matrices
X=as.matrix(df1[,1:13])
y=df1$cost

# Fit a Ridge model
fitRidge <-glmnet(X,y,alpha=0)

#Plot the model where the coefficient shrinkage is plotted vs log lambda
plot(fitRidge,xvar="lambda",label=TRUE,main= "Ridge regression coefficient shrinkage vs log lambda")

The plot below shows how the 13 coefficients of the 13 predictors vary as lambda is increased. The x-axis is log(lambda). We can see that increasing lambda from 10^{2} to 10^{6} significantly shrinks the coefficients. At any vertical line drawn from the x-axis we can read off the values of the 13 coefficients; some of them will be close to zero

# Compute the cross validation error
cvRidge=cv.glmnet(X,y,alpha=0)

#Plot the cross validation error
plot(cvRidge, main="Ridge regression Cross Validation Error (10 fold)")

This gives the 10 fold Cross Validation Error with respect to log(lambda). As lambda increases, the MSE increases

1.4a Ridge Regression – Python code

In Python, the coefficient shrinkage can be plotted as in R by fitting ridge models over a range of alpha values and plotting the coefficients against alpha. This is included below

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split


df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()

from sklearn.linear_model import Ridge
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                   random_state = 0)

# Scale the X_train and X_test
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Fit a ridge regression with alpha=20
linridge = Ridge(alpha=20.0).fit(X_train_scaled, y_train)

# Print the training R squared
print('R-squared score (training): {:.3f}'
     .format(linridge.score(X_train_scaled, y_train)))
# Print the test Rsquared
print('R-squared score (test): {:.3f}'
     .format(linridge.score(X_test_scaled, y_test)))
print('Number of non-zero features: {}'
     .format(np.sum(linridge.coef_ != 0)))

trainingRsquared=[]
testRsquared=[]
# Plot the effect of alpha on the test Rsquared
print('Ridge regression: effect of alpha regularization parameter\n')
# Choose a list of alpha values
for this_alpha in [0.001,.01,.1,0, 1, 10, 20, 50, 100, 1000]:
    linridge = Ridge(alpha = this_alpha).fit(X_train_scaled, y_train)
    # Compute training rsquared
    r2_train = linridge.score(X_train_scaled, y_train)
    # Compute test rsqaured
    r2_test = linridge.score(X_test_scaled, y_test)
    num_coeff_bigger = np.sum(abs(linridge.coef_) > 1.0)
    trainingRsquared.append(r2_train)
    testRsquared.append(r2_test)
    
# Create a dataframe
alpha=[0.001,.01,.1,0, 1, 10, 20, 50, 100, 1000]    
trainingRsquared=pd.DataFrame(trainingRsquared,index=alpha)
testRsquared=pd.DataFrame(testRsquared,index=alpha)

# Plot training and test R squared as a function of alpha
df3=pd.concat([trainingRsquared,testRsquared],axis=1)
df3.columns=['trainingRsquared','testRsquared']

fig5=df3.plot()
fig5=plt.title('Ridge training and test R squared vs Alpha')
fig5.figure.savefig('fig5.png', bbox_inches='tight')

# Plot the coefficient shrinkage over a range of alphas

from sklearn import linear_model
# #############################################################################
# Compute paths

n_alphas = 200
alphas = np.logspace(0, 8, n_alphas)

coefs = []
for a in alphas:
    ridge = linear_model.Ridge(alpha=a, fit_intercept=False)
    ridge.fit(X_train_scaled, y_train)
    coefs.append(ridge.coef_)

# #############################################################################
# Display results

ax = plt.gca()

fig6=ax.plot(alphas, coefs)
fig6=ax.set_xscale('log')
fig6=ax.set_xlim(ax.get_xlim()[::-1])  # reverse axis
fig6=plt.xlabel('alpha')
fig6=plt.ylabel('weights')
fig6=plt.title('Ridge coefficients as a function of the regularization')
fig6=plt.axis('tight')
plt.savefig('fig6.png', bbox_inches='tight')
## R-squared score (training): 0.620
## R-squared score (test): 0.438
## Number of non-zero features: 13
## Ridge regression: effect of alpha regularization parameter

The plot below shows the training and test R squared as the tuning or regularization parameter ‘alpha’ is increased

The Python coefficient shrinkage plot must be viewed from right to left, in the direction of increasing alpha. As alpha increases, the coefficients shrink towards 0.

1.5 Lasso regularization

The Lasso is another form of regularization, also known as L1 regularization. Unlike Ridge Regression, where the coefficients of features which do not influence the target merely tend towards zero, in lasso regularization these coefficients can become exactly 0. A small sketch contrasting the two is given after the formula below. The general form of the Lasso is as follows

\sum_{i=1}^{n} (y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij})^{2} + \lambda \sum_{j=1}^{p}|\beta_{j}|
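
The following is a minimal sketch, using scikit-learn on synthetic data (hypothetical, not the Boston data used in the sections below), that contrasts the two penalties: ridge only shrinks the coefficients, while the lasso drives many of the uninformative ones to exactly zero.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

# Synthetic data: 50 features, of which only 10 actually influence the target
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=10.0, random_state=0)

ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=10.0).fit(X, y)

# Ridge coefficients become small but are (almost) never exactly zero,
# whereas the lasso sets many of the uninformative coefficients to exactly zero
print("Ridge - coefficients exactly zero: {}".format(np.sum(ridge.coef_ == 0)))
print("Lasso - coefficients exactly zero: {}".format(np.sum(lasso.coef_ == 0)))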

1.5a Lasso regularization – R code

library(glmnet)
library(dplyr)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
              "distances","highways","tax","teacherRatio","color","status","cost")
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                            "distances","highways","tax","teacherRatio","color","status","cost")

# Set X and y as matrices
X=as.matrix(df1[,1:13])
y=df1$cost

# Fit the lasso model
fitLasso <- glmnet(X,y)
# Plot the coefficient shrinkage as a function of log(lambda)
plot(fitLasso,xvar="lambda",label=TRUE,main="Lasso regularization - Coefficient shrinkage vs log lambda")

The plot below shows that in L1 regularization the coefficients actually become zero with increasing lambda

# Compute the cross validation error (10 fold)
cvLasso=cv.glmnet(X,y,alpha=1)  # alpha=1 corresponds to the lasso in glmnet
# Plot the cross validation error
plot(cvLasso)

This gives the MSE for the lasso model

1.5 b Lasso regularization – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler
from sklearn import linear_model

scaler = MinMaxScaler()
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                   random_state = 0)

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

linlasso = Lasso(alpha=0.1, max_iter = 10).fit(X_train_scaled, y_train)

print('Non-zero features: {}'
     .format(np.sum(linlasso.coef_ != 0)))
print('R-squared score (training): {:.3f}'
     .format(linlasso.score(X_train_scaled, y_train)))
print('R-squared score (test): {:.3f}\n'
     .format(linlasso.score(X_test_scaled, y_test)))
print('Features with non-zero weight (sorted by absolute magnitude):')

for e in sorted (list(zip(list(X), linlasso.coef_)),
                key = lambda e: -abs(e[1])):
    if e[1] != 0:
        print('\t{}, {:.3f}'.format(e[0], e[1]))
        

print('Lasso regression: effect of alpha regularization\n\
parameter on number of features kept in final model\n')

trainingRsquared=[]
testRsquared=[]
#for alpha in [0.01,0.05,0.1, 1, 2, 3, 5, 10, 20, 50]:
for alpha in [0.01,0.07,0.05, 0.1, 1,2, 3, 5, 10]:
    linlasso = Lasso(alpha, max_iter = 10000).fit(X_train_scaled, y_train)
    r2_train = linlasso.score(X_train_scaled, y_train)
    r2_test = linlasso.score(X_test_scaled, y_test)
    trainingRsquared.append(r2_train)
    testRsquared.append(r2_test)
    
alpha=[0.01,0.07,0.05, 0.1, 1,2, 3, 5, 10]    
#alpha=[0.01,0.05,0.1, 1, 2, 3, 5, 10, 20, 50]
trainingRsquared=pd.DataFrame(trainingRsquared,index=alpha)
testRsquared=pd.DataFrame(testRsquared,index=alpha)

df3=pd.concat([trainingRsquared,testRsquared],axis=1)
df3.columns=['trainingRsquared','testRsquared']

fig7=df3.plot()
fig7=plt.title('LASSO training and test R squared vs Alpha')
fig7.figure.savefig('fig7.png', bbox_inches='tight')



## Non-zero features: 7
## R-squared score (training): 0.726
## R-squared score (test): 0.561
## 
## Features with non-zero weight (sorted by absolute magnitude):
##  status, -18.361
##  rooms, 18.232
##  teacherRatio, -8.628
##  taxRate, -2.045
##  color, 1.888
##  chasRiver, 1.670
##  distances, -0.529
## Lasso regression: effect of alpha regularization
## parameter on number of features kept in final model
## 
## Computing regularization path using the LARS ...
## .C:\Users\Ganesh\ANACON~1\lib\site-packages\sklearn\linear_model\coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
##   ConvergenceWarning)

The plot below gives the training and test R squared as a function of alpha

1.5c Lasso coefficient shrinkage – Python code

To plot the coefficient shrinkage for the Lasso, the Least Angle Regression (LARS) path from scikit-learn is used. This is shown below

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler
from sklearn import linear_model
scaler = MinMaxScaler()
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
              "distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                   random_state = 0)

X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)


print("Computing regularization path using the LARS ...")
alphas, _, coefs = linear_model.lars_path(X_train_scaled, y_train, method='lasso', verbose=True)

xx = np.sum(np.abs(coefs.T), axis=1)
xx /= xx[-1]

fig8=plt.plot(xx, coefs.T)

ymin, ymax = plt.ylim()
fig8=plt.vlines(xx, ymin, ymax, linestyle='dashed')
fig8=plt.xlabel('|coef| / max|coef|')
fig8=plt.ylabel('Coefficients')
fig8=plt.title('LASSO Path - Coefficient Shrinkage vs L1')
fig8=plt.axis('tight')
plt.savefig('fig8.png', bbox_inches='tight')

This plot shows the coefficient shrinkage for the lasso.

This 3rd part of the series covers the main ‘feature selection’ and regularization methods. I hope these posts serve as a quick and useful reference to ML code in both R and Python!
Stay tuned for further updates to this series!
Watch this space!

 

You may also like

1. Natural language processing: What would Shakespeare say?
2. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
3. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
4. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
5. Experiments with deblurring using OpenCV
6. R vs Python: Different similarities and similar differences

To see all posts see Index of posts

Practical Machine Learning with R and Python – Part 2


In this 2nd part of the series “Practical Machine Learning with R and Python – Part 2”, I continue from where I left off in my first post, Practical Machine Learning with R and Python – Part 1. In this post I cover some classification algorithms and cross validation. Specifically I touch upon
-Logistic Regression
-K Nearest Neighbors (KNN) classification
-Leave one out Cross Validation (LOOCV)
-K Fold Cross Validation
in both R and Python.

As in my initial post, the algorithms are based on the following two courses: Statistical Learning (Prof Trevor Hastie & Prof Robert Tibshirani, Stanford Online) and Applied Machine Learning in Python (Prof Kevyn Collins-Thompson, Coursera).

You can download this R Markdown file along with the data from Github. I hope these posts can be used as a quick reference for Machine Learning in R and Python. I have tried to include the coolest parts of either course in this post.

The content of this post and much more is now available as a compact book on Amazon in both formats – as a Paperback ($9.99) and a Kindle version ($6.99/Rs449). See ‘Practical Machine Learning with R and Python – Machine Learning in stereo’.

The following classification problem is based on Logistic Regression. The data set is included in Scikit-Learn; I have saved it as a csv and use it for R as well. The fit of a classification Machine Learning model depends on how correctly it classifies the data. There are several measures for testing a model’s classification performance. They are listed below, and a small sketch computing them from a confusion matrix follows the list.

Accuracy = (TP + TN) / (TP + TN + FP + FN) – Fraction of all samples correctly classified
Precision = TP / (TP + FP) – Fraction of true positives among all samples classified as positive
Recall = TP / (TP + FN) – Also known as sensitivity, or True Positive Rate; fraction of actual positives that are correctly classified
F1 = 2 * Precision * Recall / (Precision + Recall)
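
As a small illustration (with made-up labels, not the cancer data used in the next sections), the sketch below computes these measures directly from the confusion matrix counts and checks them against scikit-learn’s metric functions.

from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score

# Hypothetical true and predicted labels, just to illustrate the formulas above
y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]

# For binary 0/1 labels the confusion matrix layout is [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# Compute the measures directly from TP, TN, FP, FN ...
accuracy  = (tp + tn) / float(tp + tn + fp + fn)
precision = tp / float(tp + fp)
recall    = tp / float(tp + fn)
f1        = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)

# ... which match scikit-learn's metric functions
print(accuracy_score(y_true, y_pred), precision_score(y_true, y_pred),
      recall_score(y_true, y_pred), f1_score(y_true, y_pred))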

1a. Logistic Regression – R code

The caret and e1071 packages are required for using the confusionMatrix call

source("RFunctions.R")
library(dplyr)
library(caret)
library(e1071)
# Read the data (from sklearn)
cancer <- read.csv("cancer.csv")
# Rename the target variable
names(cancer) <- c(seq(1,30),"output")
# Split as training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Fit a generalized linear logistic model, 
fit=glm(output~.,family=binomial,data=train,control = list(maxit = 50))
# Predict the output from the model
a=predict(fit,newdata=train,type="response")
# Set response >0.5 as 1 and <=0.5 as 0
b=ifelse(a>0.5,1,0)
# Compute the confusion matrix for training data
confusionMatrix(b,train$output)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction   0   1
##          0 154   0
##          1   0 272
##                                      
##                Accuracy : 1          
##                  95% CI : (0.9914, 1)
##     No Information Rate : 0.6385     
##     P-Value [Acc > NIR] : < 2.2e-16  
##                                      
##                   Kappa : 1          
##  Mcnemar's Test P-Value : NA         
##                                      
##             Sensitivity : 1.0000     
##             Specificity : 1.0000     
##          Pos Pred Value : 1.0000     
##          Neg Pred Value : 1.0000     
##              Prevalence : 0.3615     
##          Detection Rate : 0.3615     
##    Detection Prevalence : 0.3615     
##       Balanced Accuracy : 1.0000     
##                                      
##        'Positive' Class : 0          
## 
m=predict(fit,newdata=test,type="response")
n=ifelse(m>0.5,1,0)
# Compute the confusion matrix for test output
confusionMatrix(n,test$output)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  0  1
##          0 52  4
##          1  5 81
##                                           
##                Accuracy : 0.9366          
##                  95% CI : (0.8831, 0.9706)
##     No Information Rate : 0.5986          
##     P-Value [Acc > NIR] : <2e-16          
##                                           
##                   Kappa : 0.8677          
##  Mcnemar's Test P-Value : 1               
##                                           
##             Sensitivity : 0.9123          
##             Specificity : 0.9529          
##          Pos Pred Value : 0.9286          
##          Neg Pred Value : 0.9419          
##              Prevalence : 0.4014          
##          Detection Rate : 0.3662          
##    Detection Prevalence : 0.3944          
##       Balanced Accuracy : 0.9326          
##                                           
##        'Positive' Class : 0               
## 

1b. Logistic Regression – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
os.chdir("C:\\Users\\Ganesh\\RandPython")
from sklearn.datasets import make_classification, make_blobs

from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_breast_cancer
# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
                                                   random_state = 0)
# Call the Logisitic Regression function
clf = LogisticRegression().fit(X_train, y_train)
fig, subaxes = plt.subplots(1, 1, figsize=(7, 5))
# Fit a model
clf = LogisticRegression().fit(X_train, y_train)

# Compute and print the Accuray scores
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
     .format(clf.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
     .format(clf.score(X_test, y_test)))
y_predicted=clf.predict(X_test)
# Compute and print confusion matrix
confusion = confusion_matrix(y_test, y_predicted)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(y_test, y_predicted)))
print('Precision: {:.2f}'.format(precision_score(y_test, y_predicted)))
print('Recall: {:.2f}'.format(recall_score(y_test, y_predicted)))
print('F1: {:.2f}'.format(f1_score(y_test, y_predicted)))
## Accuracy of Logistic regression classifier on training set: 0.96
## Accuracy of Logistic regression classifier on test set: 0.96
## Accuracy: 0.96
## Precision: 0.99
## Recall: 0.94
## F1: 0.97

2. Dummy variables

The following R and Python code show how dummy variables are handled in R and Python. Dummy variables are categorical variables which have to be converted into appropriate numeric values before being used in a Machine Learning model. For example, if we had a currency variable with the values ‘dollar’, ‘rupee’ and ‘yen’, then one-hot encoding creates one 0/1 indicator column per category, e.g.
dollar 1 0 0
rupee 0 1 0
yen 0 0 1
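
A minimal pandas sketch of this encoding, on a hypothetical currency column (pandas’ get_dummies is also what the Python code below uses):

import pandas as pd

# Hypothetical data frame with a categorical 'currency' column
df = pd.DataFrame({'currency': ['dollar', 'rupee', 'yen', 'rupee'],
                   'amount':   [10, 20, 30, 40]})

# get_dummies creates one indicator column per category
# (currency_dollar, currency_rupee, currency_yen)
df1 = pd.get_dummies(df, columns=['currency'])
print(df1)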

2a. Logistic Regression with dummy variables- R code

# Load the dummies library
library(dummies) 
df <- read.csv("adult1.csv",stringsAsFactors = FALSE,na.strings = c(""," "," ?"))

# Remove rows which have NA
df1 <- df[complete.cases(df),]
dim(df1)
## [1] 30161    16
# Select specific columns
adult <- df1 %>% dplyr::select(age,occupation,education,educationNum,capitalGain,
                               capital.loss,hours.per.week,native.country,salary)
# Set the dummy data with appropriate values
adult1 <- dummy.data.frame(adult, sep = ".")

#Split as training and test
train_idx <- trainTestSplit(adult1,trainPercent=75,seed=1111)
train <- adult1[train_idx, ]
test <- adult1[-train_idx, ]

# Fit a binomial logistic regression
fit=glm(salary~.,family=binomial,data=train)
# Predict response
a=predict(fit,newdata=train,type="response")
# If response >0.5 then it is a 1 and 0 otherwise
b=ifelse(a>0.5,1,0)
confusionMatrix(b,train$salary)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction     0     1
##          0 16065  3145
##          1   968  2442
##                                           
##                Accuracy : 0.8182          
##                  95% CI : (0.8131, 0.8232)
##     No Information Rate : 0.753           
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.4375          
##  Mcnemar's Test P-Value : < 2.2e-16       
##                                           
##             Sensitivity : 0.9432          
##             Specificity : 0.4371          
##          Pos Pred Value : 0.8363          
##          Neg Pred Value : 0.7161          
##              Prevalence : 0.7530          
##          Detection Rate : 0.7102          
##    Detection Prevalence : 0.8492          
##       Balanced Accuracy : 0.6901          
##                                           
##        'Positive' Class : 0               
## 
# Compute and display confusion matrix
m=predict(fit,newdata=test,type="response")
## Warning in predict.lm(object, newdata, se.fit, scale = 1, type =
## ifelse(type == : prediction from a rank-deficient fit may be misleading
n=ifelse(m>0.5,1,0)
confusionMatrix(n,test$salary)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    0    1
##          0 5263 1099
##          1  357  822
##                                           
##                Accuracy : 0.8069          
##                  95% CI : (0.7978, 0.8158)
##     No Information Rate : 0.7453          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.4174          
##  Mcnemar's Test P-Value : < 2.2e-16       
##                                           
##             Sensitivity : 0.9365          
##             Specificity : 0.4279          
##          Pos Pred Value : 0.8273          
##          Neg Pred Value : 0.6972          
##              Prevalence : 0.7453          
##          Detection Rate : 0.6979          
##    Detection Prevalence : 0.8437          
##       Balanced Accuracy : 0.6822          
##                                           
##        'Positive' Class : 0               
## 

2b. Logistic Regression with dummy variables- Python code

Pandas has a get_dummies function for handling dummies

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# Read data
df =pd.read_csv("adult1.csv",encoding="ISO-8859-1",na_values=[""," "," ?"])
# Drop rows with NA
df1=df.dropna()
print(df1.shape)
# Select specific columns
adult = df1[['age','occupation','education','educationNum','capitalGain','capital-loss', 
             'hours-per-week','native-country','salary']]

X=adult[['age','occupation','education','educationNum','capitalGain','capital-loss', 
             'hours-per-week','native-country']]
# Set approporiate values for dummy variables
X_adult=pd.get_dummies(X,columns=['occupation','education','native-country'])
y=adult['salary']

X_adult_train, X_adult_test, y_train, y_test = train_test_split(X_adult, y,
                                                   random_state = 0)
clf = LogisticRegression().fit(X_adult_train, y_train)

# Compute and display Accuracy and Confusion matrix
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
     .format(clf.score(X_adult_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
     .format(clf.score(X_adult_test, y_test)))
y_predicted=clf.predict(X_adult_test)
confusion = confusion_matrix(y_test, y_predicted)
print('Accuracy: {:.2f}'.format(accuracy_score(y_test, y_predicted)))
print('Precision: {:.2f}'.format(precision_score(y_test, y_predicted)))
print('Recall: {:.2f}'.format(recall_score(y_test, y_predicted)))
print('F1: {:.2f}'.format(f1_score(y_test, y_predicted)))
## (30161, 16)
## Accuracy of Logistic regression classifier on training set: 0.82
## Accuracy of Logistic regression classifier on test set: 0.81
## Accuracy: 0.81
## Precision: 0.68
## Recall: 0.41
## F1: 0.51

3a – K Nearest Neighbors Classification – R code

The Adult data set is taken from UCI Machine Learning Repository

source("RFunctions.R")
df <- read.csv("adult1.csv",stringsAsFactors = FALSE,na.strings = c(""," "," ?"))
# Remove rows which have NA
df1 <- df[complete.cases(df),]
dim(df1)
## [1] 30161    16
# Select specific columns
adult <- df1 %>% dplyr::select(age,occupation,education,educationNum,capitalGain,
                               capital.loss,hours.per.week,native.country,salary)
# Set dummy variables
adult1 <- dummy.data.frame(adult, sep = ".")

#Split train and test as required by the KNN classification model
train_idx <- trainTestSplit(adult1,trainPercent=75,seed=1111)
train <- adult1[train_idx, ]
test <- adult1[-train_idx, ]
train.X <- train[,1:76]
train.y <- train[,77]
test.X <- test[,1:76]
test.y <- test[,77]

# Fit a model for 1,3,5,10 and 15 neighbors
cMat <- NULL
neighbors <-c(1,3,5,10,15)
for(i in seq_along(neighbors)){
    # Use the i-th entry of 'neighbors' as k
    fit <- knn(train.X,test.X,train.y,k=neighbors[i])
    table(fit,test.y)
    a<-confusionMatrix(fit,test.y)
    cMat[i] <- a$overall[1]
    print(a$overall[1])
}
##  Accuracy 
## 0.7835831 
##  Accuracy 
## 0.8162047 
##  Accuracy 
## 0.8089113 
##  Accuracy 
## 0.8209787 
##  Accuracy 
## 0.8184591
#Plot the Accuracy for each of the KNN models
df <- data.frame(neighbors,Accuracy=cMat)
ggplot(df,aes(x=neighbors,y=Accuracy)) + geom_point() +geom_line(color="blue") +
    xlab("Number of neighbors") + ylab("Accuracy") +
    ggtitle("KNN regression - Accuracy vs Number of Neighors (Unnormalized)")

3b – K Nearest Neighbors Classification – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

# Read data
df =pd.read_csv("adult1.csv",encoding="ISO-8859-1",na_values=[""," "," ?"])
df1=df.dropna()
print(df1.shape)
# Select specific columns
adult = df1[['age','occupation','education','educationNum','capitalGain','capital-loss', 
             'hours-per-week','native-country','salary']]

X=adult[['age','occupation','education','educationNum','capitalGain','capital-loss', 
             'hours-per-week','native-country']]
             
#Set values for dummy variables
X_adult=pd.get_dummies(X,columns=['occupation','education','native-country'])
y=adult['salary']

X_adult_train, X_adult_test, y_train, y_test = train_test_split(X_adult, y,
                                                   random_state = 0)
                                                   
# KNN classification in Python requires the data to be scaled. 
# Scale the data
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_adult_train)
# Apply scaling to test set also
X_test_scaled = scaler.transform(X_adult_test)
# Compute the KNN model for 1,3,5,10 & 15 neighbors
accuracy=[]
neighbors=[1,3,5,10,15]
for i in neighbors:
    knn = KNeighborsClassifier(n_neighbors = i)
    knn.fit(X_train_scaled, y_train)
    accuracy.append(knn.score(X_test_scaled, y_test))
    print('Accuracy test score: {:.3f}'
        .format(knn.score(X_test_scaled, y_test)))

# Plot the models with the Accuracy attained for each of these models    
fig1=plt.plot(neighbors,accuracy)
fig1=plt.title("KNN regression - Accuracy vs Number of neighbors")
fig1=plt.xlabel("Neighbors")
fig1=plt.ylabel("Accuracy")
fig1.figure.savefig('foo1.png', bbox_inches='tight')
## (30161, 16)
## Accuracy test score: 0.749
## Accuracy test score: 0.779
## Accuracy test score: 0.793
## Accuracy test score: 0.804
## Accuracy test score: 0.803

Output image:

4 MPG vs Horsepower

The following scatter plot shows the non-linear relation between mpg and horsepower. This will be used as the data input for computing K Fold Cross Validation Error

4a MPG vs Horsepower scatter plot – R Code

df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
ggplot(df3,aes(x=horsepower,y=mpg)) + geom_point() + xlab("Horsepower") + 
    ylab("Miles Per gallon") + ggtitle("Miles per Gallon vs Hosrsepower")

4b MPG vs Horsepower scatter plot – Python Code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
autoDF3=autoDF2.dropna()
autoDF3.shape
#X=autoDF3[['cylinder','displacement','horsepower','weight']]
X=autoDF3[['horsepower']]
y=autoDF3['mpg']

fig11=plt.scatter(X,y)
fig11=plt.title("Miles per Gallon vs Horsepower")
fig11=plt.xlabel("Horsepower")
fig11=plt.ylabel("Miles per Gallon")
fig11.figure.savefig('foo11.png', bbox_inches='tight')

5 K Fold Cross Validation

K Fold Cross Validation is a technique in which the data set is divided into K folds or K partitions. The Machine Learning model is trained on K-1 folds and tested on the remaining fold, i.e. we have K-1 folds of training data and 1 fold for testing the ML model. Since the test fold can be chosen in C_{1}^{K} or ‘K choose 1’ ways, there are K such partitions. K Fold Cross Validation estimates the average validation error that we can expect on new, unseen test data.

The formula for K Fold Cross Validation is as follows

MSE_{k} = \frac{\sum_{i \in k} (y_{i}-yhat_{i})^{2}}{n_{k}}
and
n_{k} = \frac{N}{K}
and
CV_{(K)} = \sum_{k=1}^{K} \frac{n_{k}}{N} MSE_{k}

where n_{k} is the number of elements in partition ‘k’ and N is the total number of elements. For folds of equal size this reduces to

CV_{(K)} = \frac{1}{K} \sum_{k=1}^{K} MSE_{k}
Leave one out Cross Validation (LOOCV) is a special case of K Fold Cross Validation where N-1 data points are used to train the model and 1 data point is used to test it. There are N such partitions of N-1 & 1 that are possible, and the mean error over all of them is measured. For a least squares fit, the Cross Validation error for LOOCV can be computed with the shortcut formula

CV_{(N)} = \frac{1}{N} \sum_{i=1}^{N} \left(\frac{y_{i}-yhat_{i}}{1-h_{i}}\right)^{2}
where h_{i} is the i-th diagonal element of the hat matrix (the leverage of observation i)
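
Below is a minimal numeric sketch of this shortcut, assuming an ordinary least squares fit on made-up data (hypothetical, not the auto_mpg data used later). The implementations that follow instead compute LOOCV by brute force as K Fold Cross Validation with K=N.

import numpy as np

# Made-up data for a simple linear fit (hypothetical)
rng = np.random.RandomState(0)
x = rng.uniform(0, 10, 50)
y = 3 + 2 * x + rng.normal(0, 1, 50)

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])
# Ordinary least squares fit
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
yhat = X.dot(beta)

# Leverages h_i: diagonal of the hat matrix H = X (X'X)^-1 X'
H = X.dot(np.linalg.inv(X.T.dot(X))).dot(X.T)
h = np.diag(H)

# LOOCV error via the shortcut: mean of ((y - yhat)/(1 - h))^2
loocv = np.mean(((y - yhat) / (1 - h)) ** 2)
print("LOOCV estimate: {:.3f}".format(loocv))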

see [Statistical Learning]

The above formula is also included in this blog post

It took me a day and a half to implement the K Fold Cross Validation formula. I think it is correct. In any case do let me know if you think it is off

5a. Leave one out cross validation (LOOCV) – R Code

R uses the package ‘boot’ for performing Cross Validation error computation

library(boot)
library(reshape2)
# Read data
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
# Select complete cases
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
set.seed(17)
cv.error=rep(0,10)
# For polynomials 1,2,3... 10 fit a LOOCV model
for (i in 1:10){
    glm.fit=glm(mpg~poly(horsepower,i),data=df3)
    cv.error[i]=cv.glm(df3,glm.fit)$delta[1]
    
}
cv.error
##  [1] 24.23151 19.24821 19.33498 19.42443 19.03321 18.97864 18.83305
##  [8] 18.96115 19.06863 19.49093
# Create and display a plot
folds <- seq(1,10)
df <- data.frame(folds,cvError=cv.error)
ggplot(df,aes(x=folds,y=cvError)) + geom_point() +geom_line(color="blue") +
    xlab("Degree of Polynomial") + ylab("Cross Validation Error") +
    ggtitle("Leave one out Cross Validation - Cross Validation Error vs Degree of Polynomial")

5b. Leave one out cross validation (LOOCV) – Python Code

Rather than use a built-in scorer, in Python I compute the Cross Validation error explicitly from the formula above. I have done this after several hours; I think it is now in reasonable shape. Do let me know if you think otherwise. For LOOCV I use K Fold Cross Validation with K=N

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import train_test_split, KFold
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
# Read data
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
# Remove rows with NAs
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['horsepower']]
y=autoDF3['mpg']

# For polynomial degree 1,2,3... 10
def computeCVError(X,y,folds):
    deg=[]
    degree1=[1,2,3,4,5,6,7,8,9,10]
    # For degree 'j'
    for j in degree1: 
        # Reset the fold errors for this degree
        mse=[]
        # Split as 'folds'
        kf = KFold(len(X),n_folds=folds)
        for train_index, test_index in kf:
            # Create the appropriate train and test partitions from the fold index
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]  

            # For the polynomial degree 'j'
            poly = PolynomialFeatures(degree=j)        
            # Transform the X_train and X_test
            X_train_poly = poly.fit_transform(X_train)
            X_test_poly = poly.fit_transform(X_test)
            # Fit a model on the transformed data
            linreg = LinearRegression().fit(X_train_poly, y_train)
            # Compute yhat or ypred
            y_pred = linreg.predict(X_test_poly)   
            # Compute MSE * n_K/N (weight by the fraction of data in the test fold)
            test_mse = mean_squared_error(y_test, y_pred)*float(len(X_test))/float(len(X))     
            # Add the weighted test_mse for this partition of the data
            mse.append(test_mse)
        # Sum the weighted fold errors to get the CV error for degree 'j'
        deg.append(np.sum(mse))
        
    return(deg)


df=pd.DataFrame()
print(len(X))
# Call the function once. For LOOCV K=N. hence len(X) is passed as number of folds
cvError=computeCVError(X,y,len(X))

# Create and plot LOOCV
df=pd.DataFrame(cvError)
fig3=df.plot()
fig3=plt.title("Leave one out Cross Validation - Cross Validation Error vs Degree of Polynomial")
fig3=plt.xlabel("Degree of Polynomial")
fig3=plt.ylabel("Cross validation Error")
fig3.figure.savefig('foo3.png', bbox_inches='tight')

 

6a K Fold Cross Validation – R code

Here K Fold Cross Validation is done for 4, 5 and 10 folds using the R package ‘boot’ and the glm function

library(boot)
library(reshape2)
set.seed(17)
#Read data
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
a=matrix(rep(0,30),nrow=3,ncol=10)
set.seed(17)
# Set the folds as 4,5 and 10
folds<-c(4,5,10)
for(i in seq_along(folds)){
    cv.error.10=rep(0,10)
    for (j in 1:10){
        # Fit a generalized linear model
        glm.fit=glm(mpg~poly(horsepower,j),data=df3)
        # Compute K Fold Validation error
        a[i,j]=cv.glm(df3,glm.fit,K=folds[i])$delta[1]
        
    }
    
}

# Create and display the K Fold Cross Validation Error
b <- t(a)
df <- data.frame(b)
df1 <- cbind(seq(1,10),df)
names(df1) <- c("PolynomialDegree","4-fold","5-fold","10-fold")

df2 <- melt(df1,id="PolynomialDegree")
ggplot(df2) + geom_line(aes(x=PolynomialDegree, y=value, colour=variable),size=2) +
    xlab("Degree of Polynomial") + ylab("Cross Validation Error") +
    ggtitle("K Fold Cross Validation - Cross Validation Error vs Degree of Polynomial")

6b. K Fold Cross Validation – Python code

The K-Fold Cross Validation error computation has to be implemented explicitly in Python, and I have done this below. There is a small discrepancy in the shapes of the curves compared with the R plot above. Not sure why!

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import train_test_split, KFold
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
# Read data
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
# Drop NA rows
autoDF3=autoDF2.dropna()
autoDF3.shape
#X=autoDF3[['cylinder','displacement','horsepower','weight']]
X=autoDF3[['horsepower']]
y=autoDF3['mpg']

# Create Cross Validation function
def computeCVError(X,y,folds):
    deg=[]
    # For degree 1,2,3,..10
    degree1=[1,2,3,4,5,6,7,8,9,10]
    for j in degree1: 
        # Reset the fold errors for this degree
        mse=[]
        # Split the data into 'folds'
        kf = KFold(len(X),n_folds=folds)
        for train_index, test_index in kf:
            # Partition the data according to the fold indices generated
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]  

            # Transform the X_train and X_test as per the polynomial degree 'j'
            poly = PolynomialFeatures(degree=j)             
            X_train_poly = poly.fit_transform(X_train)
            X_test_poly = poly.fit_transform(X_test)
            # Fit a polynomial regression
            linreg = LinearRegression().fit(X_train_poly, y_train)
            # Compute yhat or ypred
            y_pred = linreg.predict(X_test_poly)  
            # Compute MSE *(nK/N) - weight by the fraction of data in the test fold
            test_mse = mean_squared_error(y_test, y_pred)*float(len(X_test))/float(len(X))  
            # Append the weighted fold error
            mse.append(test_mse)
        # Sum the weighted fold errors for polynomial degree 'j'
        deg.append(np.sum(mse))
        
    return(deg)

# Create and display a plot of K -Folds
df=pd.DataFrame()
for folds in [4,5,10]:
    cvError=computeCVError(X,y,folds)
    #print(cvError)
    df1=pd.DataFrame(cvError)
    df=pd.concat([df,df1],axis=1)
    #print(cvError)
    
df.columns=['4-fold','5-fold','10-fold']
# Label the rows with the polynomial degree (1 to 10)
df.index=[1,2,3,4,5,6,7,8,9,10]
df
fig2=df.plot()
fig2=plt.title("K Fold Cross Validation - Cross Validation Error vs Degree of Polynomial")
fig2=plt.xlabel("Degree of Polynomial")
fig2=plt.ylabel("Cross validation Error")
fig2.figure.savefig('foo2.png', bbox_inches='tight')

output

This concludes this 2nd part of this series. I will look into model tuning and model selection in R and Python in the coming parts. Comments, suggestions and corrections are welcome!
To be continued….
Watch this space!

Also see

  1. Design Principles of Scalable, Distributed Systems
  2. Re-introducing cricketr! : An R package to analyze performances of cricketers
  3. Spicing up a IBM Bluemix cloud app with MongoDB and NodeExpress
  4. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
  5. Simulating an Edge Shape in Android

To see all posts see Index of posts

Practical Machine Learning with R and Python – Part 1


Introduction

This is the 1st part of a series of posts I intend to write on some common Machine Learning Algorithms in R and Python. In this first part I cover the following Machine Learning Algorithms

  • Univariate Regression
  • Multivariate Regression
  • Polynomial Regression
  • K Nearest Neighbors Regression

The code includes the implementation in both R and Python. This series of posts are based on the following 2 MOOC courses I did at Stanford Online and at Coursera

  1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Stanford Online
  2. Applied Machine Learning in Python, Prof Kevyn Collins-Thompson, University Of Michigan, Coursera

I have used the data sets from the UCI Machine Learning repository (Communities and Crime, and Auto MPG). I also use the Boston data set from the MASS package

The content of this post and much more is now available as a compact book on Amazon in both formats – as a Paperback ($9.99) and a Kindle version ($6.99/Rs449). See ‘Practical Machine Learning with R and Python – Machine Learning in stereo’.

While coding in R and Python I found that there were some aspects that were more convenient in one language and some in the other. For example, plotting the fit is straightforward in R, while computing the R squared, splitting into Train & Test sets etc. are readily available in Python. In any case, these minor inconveniences can easily be implemented in either language.

In R, the R squared is computed as follows
RSS=\sum (y-yhat)^{2}
TSS= \sum(y-mean(y))^{2}
Rsquared = 1-\frac{RSS}{TSS}
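
As a quick numeric check of the formula, here is a minimal sketch with made-up actual and predicted values:

import numpy as np

# Hypothetical actual and predicted values, just to illustrate the formula
y    = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
yhat = np.array([2.8, 5.3, 6.9, 9.4, 10.6])

RSS = np.sum((y - yhat) ** 2)          # residual sum of squares
TSS = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
rsquared = 1 - RSS / TSS
print("R squared: {:.3f}".format(rsquared))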

Note: You can download this R Markdown file and the associated data sets from Github at MachineLearning-RandPython
Note 1: This post was created as an R Markdown file in RStudio which has a cool feature of including R and Python snippets. The plot of matplotlib needs a workaround but otherwise this is a real cool feature of RStudio!

1.1a Univariate Regression – R code

Here a simple linear regression line is fitted between a single input feature and the target variable

# Source in the R function library
source("RFunctions.R")
# Read the Boston data file
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - Statistical Learning

# Split the data into training and test sets (75:25)
train_idx <- trainTestSplit(df,trainPercent=75,seed=5)
train <- df[train_idx, ]
test <- df[-train_idx, ]

# Fit a linear regression line between 'Median value of owner occupied homes' vs 'lower status of 
# population'
fit=lm(medv~lstat,data=df)
# Display details of fit
summary(fit)
## 
## Call:
## lm(formula = medv ~ lstat, data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -15.168  -3.990  -1.318   2.034  24.500 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 34.55384    0.56263   61.41   <2e-16 ***
## lstat       -0.95005    0.03873  -24.53   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.216 on 504 degrees of freedom
## Multiple R-squared:  0.5441, Adjusted R-squared:  0.5432 
## F-statistic: 601.6 on 1 and 504 DF,  p-value: < 2.2e-16
# Display the confidence intervals
confint(fit)
##                 2.5 %     97.5 %
## (Intercept) 33.448457 35.6592247
## lstat       -1.026148 -0.8739505
plot(df$lstat,df$medv, xlab="Lower status (%)",ylab="Median value of owned homes ($1000)", main="Median value of homes ($1000) vs Lower status (%)")
abline(fit)
abline(fit,lwd=3)
abline(fit,lwd=3,col="red")

rsquared=Rsquared(fit,test,test$medv)
sprintf("R-squared for uni-variate regression (Boston.csv)  is : %f", rsquared)
## [1] "R-squared for uni-variate regression (Boston.csv)  is : 0.556964"

1.1b Univariate Regression – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
#os.chdir("C:\\software\\machine-learning\\RandPython")

# Read the CSV file
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
# Select the feature variable
X=df['lstat']

# Select the target 
y=df['medv']

# Split into train and test sets (75:25)
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state = 0)
X_train=X_train.values.reshape(-1,1)
X_test=X_test.values.reshape(-1,1)

# Fit a linear model
linreg = LinearRegression().fit(X_train, y_train)

# Print the training and test R squared score
print('R-squared score (training): {:.3f}'.format(linreg.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'.format(linreg.score(X_test, y_test)))
     
# Plot the linear regression line
fig=plt.scatter(X_train,y_train)

# Create a range of points. Compute yhat=coeff1*x + intercept and plot
x=np.linspace(0,40,20)
fig1=plt.plot(x, linreg.coef_ * x + linreg.intercept_, color='red')
fig1=plt.title("Median value of homes ($1000) vs Lowe status (%)")
fig1=plt.xlabel("Lower status (%)")
fig1=plt.ylabel("Median value of owned homes ($1000)")
fig.figure.savefig('foo.png', bbox_inches='tight')
fig1.figure.savefig('foo1.png', bbox_inches='tight')
print "Finished"
## R-squared score (training): 0.571
## R-squared score (test): 0.458
## Finished

1.2a Multivariate Regression – R code

# Read crimes data
crimesDF <- read.csv("crimes.csv",stringsAsFactors = FALSE)

# Remove the 1st 7 columns which do not impact output
crimesDF1 <- crimesDF[,7:length(crimesDF)]

# Convert all to numeric
crimesDF2 <- sapply(crimesDF1,as.numeric)

# Check for NAs
a <- is.na(crimesDF2)
# Set to 0 as an imputation
crimesDF2[a] <-0
#Create as a dataframe
crimesDF2 <- as.data.frame(crimesDF2)
#Create a train/test split
train_idx <- trainTestSplit(crimesDF2,trainPercent=75,seed=5)
train <- crimesDF2[train_idx, ]
test <- crimesDF2[-train_idx, ]

# Fit a multivariate regression model between crimesPerPop and all other features
fit <- lm(ViolentCrimesPerPop~.,data=train)

# Compute and print R Squared
rsquared=Rsquared(fit,test,test$ViolentCrimesPerPop)
sprintf("R-squared for multi-variate regression (crimes.csv)  is : %f", rsquared)
## [1] "R-squared for multi-variate regression (crimes.csv)  is : 0.653940"

1.2b Multivariate Regression – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Read the data
crimesDF =pd.read_csv("crimes.csv",encoding="ISO-8859-1")
#Remove the 1st 7 columns
crimesDF1=crimesDF.iloc[:,7:crimesDF.shape[1]]
# Convert to numeric
crimesDF2 = crimesDF1.apply(pd.to_numeric, errors='coerce')
# Impute NA to 0s
crimesDF2.fillna(0, inplace=True)

# Select the X (feature variables - all)
X=crimesDF2.iloc[:,0:120]

# Set the target
y=crimesDF2.iloc[:,121]

X_train, X_test, y_train, y_test = train_test_split(X, y,random_state = 0)
# Fit a multivariate regression model
linreg = LinearRegression().fit(X_train, y_train)

# compute and print the R Square
print('R-squared score (training): {:.3f}'.format(linreg.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'.format(linreg.score(X_test, y_test)))
## R-squared score (training): 0.699
## R-squared score (test): 0.677

1.3a Polynomial Regression – R

For Polynomial regression, polynomials of degree 1, 2 & 3 are used and the R squared is computed. It can be seen that the quadratic model provides the best R squared score and hence the best fit

 # Polynomial degree 1
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))

# Select key columns
df2 <- df1 %>% select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]

# Split as train and test sets
train_idx <- trainTestSplit(df3,trainPercent=75,seed=5)
train <- df3[train_idx, ]
test <- df3[-train_idx, ]

# Fit a model of degree 1
fit <- lm(mpg~. ,data=train)
rsquared1 <-Rsquared(fit,test,test$mpg)
sprintf("R-squared for Polynomial regression of degree 1 (auto_mpg.csv)  is : %f", rsquared1)
## [1] "R-squared for Polynomial regression of degree 1 (auto_mpg.csv)  is : 0.763607"
# Polynomial degree 2 - Quadratic
x = as.matrix(df3[1:6])
# Make a  polynomial  of degree 2 for feature variables before split
df4=as.data.frame(poly(x,2,raw=TRUE))
df5 <- cbind(df4,df3[7])

# Split into train and test set
train_idx <- trainTestSplit(df5,trainPercent=75,seed=5)
train <- df5[train_idx, ]
test <- df5[-train_idx, ]

# Fit the quadratic model
fit <- lm(mpg~. ,data=train)
# Compute R squared
rsquared2=Rsquared(fit,test,test$mpg)
sprintf("R-squared for Polynomial regression of degree 2 (auto_mpg.csv)  is : %f", rsquared2)
## [1] "R-squared for Polynomial regression of degree 2 (auto_mpg.csv)  is : 0.831372"
#Polynomial degree 3
x = as.matrix(df3[1:6])
# Make a polynomial of degree 3 of the feature variables before the split
df4=as.data.frame(poly(x,3,raw=TRUE))
df5 <- cbind(df4,df3[7])
train_idx <- trainTestSplit(df5,trainPercent=75,seed=5)

train <- df5[train_idx, ]
test <- df5[-train_idx, ]
# Fit a model of degree 3
fit <- lm(mpg~. ,data=train)
# Compute R squared
rsquared3=Rsquared(fit,test,test$mpg)
sprintf("R-squared for Polynomial regression of degree 2 (auto_mpg.csv)  is : %f", rsquared3)
## [1] "R-squared for Polynomial regression of degree 2 (auto_mpg.csv)  is : 0.773225"
df=data.frame(degree=c(1,2,3),Rsquared=c(rsquared1,rsquared2,rsquared3))
# Make a plot of Rsquared and degree
ggplot(df,aes(x=degree,y=Rsquared)) +geom_point() + geom_line(color="blue") +
    ggtitle("Polynomial regression - R squared vs Degree of polynomial") +
    xlab("Degree") + ylab("R squared")

1.3b Polynomial Regression – Python

For polynomial regression, polynomials of degree 1, 2 and 3 are used and the R squared is computed in each case. It can be seen that the quadratic model provides the best test R squared score and hence the best fit; the degree 3 model fits the training data better (0.933) but its test score drops to 0.710, a sign of overfitting.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
# Select key columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
# Convert columns to numeric
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
# Drop NAs
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['cylinder','displacement','horsepower','weight','acceleration','year']]
y=autoDF3['mpg']

# Polynomial degree 1
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state = 0)
linreg = LinearRegression().fit(X_train, y_train)
print('R-squared score - Polynomial degree 1 (training): {:.3f}'.format(linreg.score(X_train, y_train)))
# Compute R squared     
rsquared1 =linreg.score(X_test, y_test)
print('R-squared score - Polynomial degree 1 (test): {:.3f}'.format(linreg.score(X_test, y_test)))

# Polynomial degree 2
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_poly, y,random_state = 0)
linreg = LinearRegression().fit(X_train, y_train)

# Compute R squared
print('R-squared score - Polynomial degree 2 (training): {:.3f}'.format(linreg.score(X_train, y_train)))
rsquared2 =linreg.score(X_test, y_test)
print('R-squared score - Polynomial degree 2 (test): {:.3f}\n'.format(linreg.score(X_test, y_test)))

#Polynomial degree 3

poly = PolynomialFeatures(degree=3)
X_poly = poly.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_poly, y,random_state = 0)
linreg = LinearRegression().fit(X_train, y_train)
print('R-squared score - Polynomial degree 3 (training): {:.3f}'
     .format(linreg.score(X_train, y_train)))
# Compute R squared     
rsquared3 =linreg.score(X_test, y_test)
print('R-squared score - Polynomial degree 3 (test): {:.3f}\n'.format(linreg.score(X_test, y_test)))
degree=[1,2,3]
rsquared =[rsquared1,rsquared2,rsquared3]
fig2=plt.plot(degree,rsquared)
fig2=plt.title("Polynomial regression - R squared vs Degree of polynomial")
fig2=plt.xlabel("Degree")
fig2=plt.ylabel("R squared")
fig2.figure.savefig('foo2.png', bbox_inches='tight')
print "Finished plotting and saving"
## R-squared score - Polynomial degree 1 (training): 0.811
## R-squared score - Polynomial degree 1 (test): 0.799
## R-squared score - Polynomial degree 2 (training): 0.861
## R-squared score - Polynomial degree 2 (test): 0.847
## 
## R-squared score - Polynomial degree 3 (training): 0.933
## R-squared score - Polynomial degree 3 (test): 0.710
## 
## Finished plotting and saving

1.4 K Nearest Neighbors

The code below implements KNN regression in both R and Python for different numbers of neighbors, and the R squared is computed in each case. The exercise is then repeated after feature scaling, and it can be seen that the model fit is much better once the features are scaled. Normalization refers to

X_{normalized} = \frac{X-min(X)}{max(X)-min(X)}

Another technique that is used is Standardization which is

X_{standardized} = \frac{X-mean(X)}{sd(X)}
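To make the two formulas concrete, here is a minimal Python sketch. The helper names minmax_scale and standardize are hypothetical and are not used in the code that follows; the R code below relies on a MinMaxScaler helper and the Python code uses sklearn's MinMaxScaler.

import numpy as np

def minmax_scale(X):
    # Normalization: rescale each column to [0, 1] as (X - min) / (max - min)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def standardize(X):
    # Standardization: centre each column and divide by its standard deviation
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Toy example with two features on very different scales
X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 800.0]])
print(minmax_scale(X))
print(standardize(X))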

1.4a K Nearest Neighbors Regression – R (Unnormalized)

The R code below does not use feature scaling

# KNN regression requires the FNN package
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]

# Split train and test
train_idx <- trainTestSplit(df3,trainPercent=75,seed=5)
train <- df3[train_idx, ]
test <- df3[-train_idx, ]
#  Select the feature variables
train.X=train[,1:6]
# Set the target for training
train.Y=train[,7]
# Do the same for test set
test.X=test[,1:6]
test.Y=test[,7]

rsquared <- NULL
# Create a list of neighbors
neighbors <-c(1,2,4,8,10,14)
for(i in seq_along(neighbors)){
    # Perform a KNN regression fit
    knn=knn.reg(train.X,test.X,train.Y,k=neighbors[i])
    # Compute R squared
    rsquared[i]=knnRSquared(knn$pred,test.Y)
}

# Make a dataframe for plotting
df <- data.frame(neighbors,Rsquared=rsquared)
# Plot the number of neighbors vs the R squared
ggplot(df,aes(x=neighbors,y=Rsquared)) + geom_point() +geom_line(color="blue") +
    xlab("Number of neighbors") + ylab("R squared") +
    ggtitle("KNN regression - R squared vs Number of Neighbors (Unnormalized)")

1.4b K Nearest Neighbors Regression – Python (Unnormalized)

The Python code below does not use feature scaling

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.neighbors import KNeighborsRegressor
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['cylinder','displacement','horsepower','weight','acceleration','year']]
y=autoDF3['mpg']

# Perform a train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# Create a list of neighbors
rsquared=[]
neighbors=[1,2,4,8,10,14]
for i in neighbors:
        # Fit a KNN model
        knnreg = KNeighborsRegressor(n_neighbors = i).fit(X_train, y_train)
        # Compute R squared
        rsquared.append(knnreg.score(X_test, y_test))
        print('R-squared test score: {:.3f}'
        .format(knnreg.score(X_test, y_test)))
# Plot the number of neighbors vs the R squared
fig3=plt.plot(neighbors,rsquared)
fig3=plt.title("KNN regression - R squared vs Number of neighbors (Unnormalized)")
fig3=plt.xlabel("Neighbors")
fig3=plt.ylabel("R squared")
fig3.figure.savefig('foo3.png', bbox_inches='tight')
print "Finished plotting and saving"
## R-squared test score: 0.527
## R-squared test score: 0.678
## R-squared test score: 0.707
## R-squared test score: 0.684
## R-squared test score: 0.683
## R-squared test score: 0.670
## Finished plotting and saving

1.4c K Nearest Neighbors Regression – R (Normalized)

It can be seen that R squared improves when the features are normalized.

df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]

# Split train and test
train_idx <- trainTestSplit(df3,trainPercent=75,seed=5)
train <- df3[train_idx, ]
test <- df3[-train_idx, ]
# Select the feature variables and the target
train.X=train[,1:6]
train.Y=train[,7]
test.X=test[,1:6]
test.Y=test[,7]

# Perform MinMaxScaling of feature variables
train.X.scaled=MinMaxScaler(train.X)
test.X.scaled=MinMaxScaler(test.X)

# Create a list of neighbors
rsquared <- NULL
neighbors <-c(1,2,4,6,8,10,12,15,20,25,30)
for(i in seq_along(neighbors)){
    # Fit a KNN model
    knn=knn.reg(train.X.scaled,test.X.scaled,train.Y,k=neighbors[i])
    # Compute R squared
    rsquared[i]=knnRSquared(knn$pred,test.Y)
}

df <- data.frame(neighbors,Rsquared=rsquared)
# Plot the number of neighbors vs the R squared
ggplot(df,aes(x=neighbors,y=Rsquared)) + geom_point() +geom_line(color="blue") +
    xlab("Number of neighbors") + ylab("R squared") +
    ggtitle("KNN regression - R squared vs Number of Neighbors (Normalized)")

1.4d K Nearest Neighbors Regression – Python (Normalized)

R squared improves when the features are normalized with MinMaxScaling

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import MinMaxScaler
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['cylinder','displacement','horsepower','weight','acceleration','year']]
y=autoDF3['mpg']

# Perform a train/ test  split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# Use MinMaxScaling
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
# Apply scaling on test set
X_test_scaled = scaler.transform(X_test)

# Create a list of neighbors
rsquared=[]
neighbors=[1,2,4,6,8,10,12,15,20,25,30]
for i in neighbors:
    # Fit a KNN model
    knnreg = KNeighborsRegressor(n_neighbors = i).fit(X_train_scaled, y_train)
    # Compute R squared
    rsquared.append(knnreg.score(X_test_scaled, y_test))
    print('R-squared test score: {:.3f}'
        .format(knnreg.score(X_test_scaled, y_test)))

# Plot the number of neighbors vs the R squared
fig4=plt.plot(neighbors,rsquared)
fig4=plt.title("KNN regression - R squared vs Number of neighbors (Normalized)")
fig4=plt.xlabel("Neighbors")
fig4=plt.ylabel("R squared")
fig4.figure.savefig('foo4.png', bbox_inches='tight')
print "Finished plotting and saving"
## R-squared test score: 0.703
## R-squared test score: 0.810
## R-squared test score: 0.830
## R-squared test score: 0.838
## R-squared test score: 0.834
## R-squared test score: 0.828
## R-squared test score: 0.827
## R-squared test score: 0.826
## R-squared test score: 0.816
## R-squared test score: 0.815
## R-squared test score: 0.809
## Finished plotting and saving

Conclusion

In this initial post I have covered regression models for the case where the output is continuous. I intend to touch upon other Machine Learning algorithms in subsequent posts.
Comments, suggestions and corrections are welcome.

Watch this space!

To be continued….

You may like
1. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
2. Neural Networks: The mechanics of backpropagation
3. More book, more cricket! 2nd edition of my books now on Amazon
4. Spicing up a IBM Bluemix cloud app with MongoDB and NodeExpress
5. Introducing cricket package yorkr:Part 4-In the block hole!

To see all posts see Index of posts

Neural Networks: On Perceptrons and Sigmoid Neurons


Neural Networks had their beginnings in 1943 when Warren McCulloch, a neurophysiologist, and a young mathematician, Walter Pitts, wrote a paper on how neurons might work. Much later, in 1958, Frank Rosenblatt, a neurobiologist, proposed the Perceptron. The Perceptron is a computational model, or computerized machine, devised to represent or simulate the brain's ability to recognize and discriminate. In machine learning, the perceptron is an algorithm for the supervised learning of binary classifiers.

Initially it was believed that Perceptrons were capable of many things, including “the ability to walk, talk, see, write, reproduce itself and be conscious of its existence.”

However, a subsequent book by Marvin Minsky and Seymour Papert of MIT, titled “Perceptrons”, proved that the Perceptron was truly limited in its functionality. Specifically, they showed that the Perceptron is incapable of learning the XOR function; it can only classify data points that are linearly separable.

This post implements the simple learning algorithm of the ‘Linear Perceptron’ and the ‘Sigmoid Perceptron’. The implementation has been done in Octave, and is based on the “Neural Networks for Machine Learning” course by Prof Geoffrey Hinton on Coursera.

Perceptron learning procedure
z = \sum w_{i}x_{i} + b
where w_{i} is the i^{th} weight and x_{i} is the i^{th} feature

For every training case compute the activation output z

  • If the output classifies correctly, leave the weights alone
  • If the output incorrectly classifies a ‘0’ as a ‘1’, then subtract the feature from the weights
  • If the output incorrectly classifies a ‘1’ as a ‘0’, then add the feature to the weights

This simple neural network is represented below
perceptron
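For concreteness, the update rule above can be sketched in a few lines of Python. This is only an illustrative sketch, not the Octave implementation on Github, and the toy data set at the bottom is made up and assumed to be linearly separable.

import numpy as np

def train_perceptron(X, y, epochs=20):
    # X: (n_samples, n_features), y: labels in {0, 1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            output = 1 if np.dot(w, xi) + b > 0 else 0
            if output == target:
                continue                   # correct classification: leave the weights alone
            elif output == 1 and target == 0:
                w, b = w - xi, b - 1.0     # a '0' classified as a '1': subtract the feature
            else:
                w, b = w + xi, b + 1.0     # a '1' classified as a '0': add the feature
    return w, b

# Hypothetical linearly separable toy data
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, 0, 0])
w, b = train_perceptron(X, y)
print(w, b)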

Sigmoid neuron learning procedure
z_{i} = sigmoid(\sum w_{i}x_{i} + b)
where sigmoid is
sigmoid(z) = \frac{1}{1+e^{-z}}

Hence
z_{i} = \frac{1}{1+e^{-(\sum w_{i}x_{i}+b)}}
For every training case compute the activation output z_{i}

  • If the output classifies correctly, leave the weights alone
  • If the output incorrectly classifies a ‘0’ as a ‘1’, i.e. z_{i} > sigmoid(0), then subtract the feature from the weights
  • If the output incorrectly classifies a ‘1’ as a ‘0’, i.e. z_{i} < sigmoid(0), then add the feature to the weights
  • Iterate till errors <= 1

This is shown below
sigmoid_neuron
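The only change relative to the perceptron sketch above is that the activation is passed through the sigmoid and compared against sigmoid(0) = 0.5. Again, this is a minimal Python illustration of the rule with a made-up training case, not the Octave code.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_neuron_update(w, b, xi, target):
    # Sigmoid activation for one training case, followed by the update rule above
    z = sigmoid(np.dot(w, xi) + b)
    if z > sigmoid(0) and target == 0:      # a '0' classified as a '1': subtract the feature
        w, b = w - xi, b - 1.0
    elif z < sigmoid(0) and target == 1:    # a '1' classified as a '0': add the feature
        w, b = w + xi, b + 1.0
    return w, b

# Example: one update step on a hypothetical training case labelled '1'
w, b = sigmoid_neuron_update(np.array([-1.0, 0.5]), 0.0, np.array([2.0, 1.0]), 1)
print(w, b)   # the '1' was classified as a '0', so the feature is added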

I have implemented the learning algorithm of the Perceptron and the Sigmoid Neuron in Octave. The code is available on Github at Perceptron.

  1. Perceptron execution

I performed the tests on 2 different datasets

Data 1
untitled

Data 2
untitled

2. Sigmoid Perceptron execution
Data 1 & Data 2

It can be seen that the Perceptron does work for simple linearly separable data. I will be implementing other more advanced Neural Networks in the months to come.

Watch this space!