# Deep Learning from first principles in Python, R and Octave – Part 2

“What does the world outside your head really ‘look’ like? Not only is there no color, there’s also no sound: the compression and expansion of air is picked up by the ears, and turned into electrical signals. The brain then presents these signals to us as mellifluous tones and swishes and clatters and jangles. Reality is also odorless: there’s no such thing as smell outside our brains. Molecules floating through the air bind to receptors in our nose and are interpreted as different smells by our brain. The real world is not full of rich sensory events; instead, our brains light up the world with their own sensuality.”
From “The Brain: The Story of You” by David Eagleman

The world is Maya, illusory. The ultimate reality, the Brahman, is all-pervading and all-permeating, which is colourless, odourless, tasteless, nameless and formless

## 1. Introduction

This post is a follow-up to my earlier post Deep Learning from first principles in Python, R and Octave-Part 1. In the first part, I implemented Logistic Regression in vectorized Python, R and Octave, with a wannabe Neural Network (a Neural Network with no hidden layers). In this second part, I implement a regular, but somewhat primitive Neural Network (a Neural Network with just 1 hidden layer). This part implements the classification of manually created datasets, where the different clusters of the 2 classes are not linearly separable.

Neural Networks perform really well in learning all sorts of non-linear boundaries between classes. Initially, logistic regression is used to perform the classification and the decision boundary is plotted. Vanilla logistic regression performs quite poorly. Using SVMs with a radial basis kernel would have performed much better in creating non-linear boundaries. To see R and Python implementations of SVMs take a look at my post Practical Machine Learning with R and Python – Part 4.

You could also check out my book on Amazon Practical Machine Learning with R and Python – Machine Learning in Stereo,  in which I implement several Machine Learning algorithms on regression and classification, along with other necessary metrics that are used in Machine Learning.

You can clone and fork this R Markdown file along with the vectorized implementations of the 3 layer Neural Network for Python, R and Octave from Github DeepLearning-Part2

### 2. The 3 layer Neural Network

A simple representation of a 3 layer Neural Network (NN) with 1 hidden layer is shown below.

In the above Neural Network, there are 2 features at the input layer, 3 units at the hidden layer and 1 unit at the output layer, since it deals with binary classification. The activation at the hidden layer can be tanh, sigmoid, relu etc. At the output layer the activation is a sigmoid, to handle binary classification.

The superscript 1 indicates layer 1 (the hidden layer); each hidden unit has its own bias
$z_{11} = w_{11}^{1}x_{1} + w_{21}^{1}x_{2} + b_{11}$
$z_{12} = w_{12}^{1}x_{1} + w_{22}^{1}x_{2} + b_{12}$
$z_{13} = w_{13}^{1}x_{1} + w_{23}^{1}x_{2} + b_{13}$

Also $a_{11} = tanh(z_{11})$
$a_{12} = tanh(z_{12})$
$a_{13} = tanh(z_{13})$

The superscript 2 indicates layer 2 (the output layer)
$z_{21} = w_{11}^{2}a_{11} + w_{21}^{2}a_{12} + w_{31}^{2}a_{13} + b_{2}$
$a_{21} = sigmoid(z_{21})$

Hence
$Z1= \begin{pmatrix} z_{11}\\ z_{12}\\ z_{13} \end{pmatrix} =\begin{pmatrix} w_{11}^{1} & w_{21}^{1} \\ w_{12}^{1} & w_{22}^{1} \\ w_{13}^{1} & w_{23}^{1} \end{pmatrix} * \begin{pmatrix} x_{1}\\ x_{2} \end{pmatrix} + \begin{pmatrix} b_{11}\\ b_{12}\\ b_{13} \end{pmatrix}$
And
$A1= \begin{pmatrix} a_{11}\\ a_{12}\\ a_{13} \end{pmatrix} = \begin{pmatrix} tanh(z_{11})\\ tanh(z_{12})\\ tanh(z_{13}) \end{pmatrix}$

Similarly
$Z2= z_{21} = \begin{pmatrix} w_{11}^{2} & w_{21}^{2} & w_{31}^{2} \end{pmatrix} *\begin{pmatrix} a_{11}\\ a_{12}\\ a_{13} \end{pmatrix} +b_{2}$
and $A2 = a_{21} = sigmoid(z_{21})$

These equations can be written as
$Z1 = W1 * X + b1$
$A1 = tanh(Z1)$
$Z2 = W2 * A1 + b2$
$A2 = sigmoid(Z2)$
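
Expressed in numpy, this vectorized forward pass is only a few lines. The sketch below is purely illustrative (the function and variable names are my own, not necessarily those of the DLfunctions.py used later); X has shape (2, m), and W1, b1, W2, b2 have shapes (3,2), (3,1), (1,3) and (1,1) respectively.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Forward pass: Z1 = W1*X + b1, A1 = tanh(Z1), Z2 = W2*A1 + b2, A2 = sigmoid(Z2)
def forward_pass(X, W1, b1, W2, b2):
    Z1 = np.dot(W1, X) + b1      # shape (numHidden, m); b1 broadcasts across the m columns
    A1 = np.tanh(Z1)             # hidden layer activation
    Z2 = np.dot(W2, A1) + b2     # shape (1, m)
    A2 = sigmoid(Z2)             # output probability of class 1
    return Z1, A1, Z2, A2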

I) Some important results (a memory refresher!)
$d/dx(e^{x}) = e^{x}$ and $d/dx(e^{-x}) = -e^{-x}$ -(a) and
$sinhx = (e^{x} - e^{-x})/2$ and $coshx = (e^{x} + e^{-x})/2$
Using (a) we can show that $d/dx(sinhx) = coshx$ and $d/dx(coshx) = sinhx$ -(b)
Now $d/dx(f(x)/g(x)) = (g(x)*d/dx(f(x)) - f(x)*d/dx(g(x)))/g(x)^{2}$ -(c)

Since $tanhx = sinhx/coshx$, using the quotient rule (c) we get
$d/dx(tanhx) = (coshx*d/dx(sinhx) - sinhx*d/dx(coshx))/coshx^{2}$
Using the values of the derivatives of sinhx and coshx from (b) above we get
$d/dx(tanhx) = (coshx^{2} - sinhx^{2})/coshx^{2} = 1 - tanhx^{2}$
Writing $z = tanhx$, this is
$d/dx(tanhx) = 1 - tanhx^{2}= 1 - z^{2}$ -(d)
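
Result (d) is easy to verify numerically with a central-difference check (a throwaway snippet, not part of the network code):

import numpy as np
x, h = 0.7, 1e-5
numerical = (np.tanh(x + h) - np.tanh(x - h)) / (2 * h)   # approximate d/dx tanh(x)
analytical = 1 - np.tanh(x) ** 2                          # 1 - tanh(x)^2, from (d)
print(numerical, analytical)   # both ~0.6347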

II) Derivatives
$L=-(Ylog(A2) + (1-Y)log(1-A2))$
$dL/dA2 = -(Y/A2 - (1-Y)/(1-A2))$
Since $A2 = sigmoid(Z2)$ therefore $dA2/dZ2 = A2(1-A2)$ see Part1
$Z2 = W2A1 +b2$
$dZ2/dW2 = A1$
$dZ2/db2 = 1$
$A1 = tanh(Z1)$ and $dA1/dZ1 = 1 - A1^{2}$
$Z1 = W1X + b1$
$dZ1/dW1 = X$
$dZ1/db1 = 1$

III) Back propagation
Using the derivatives from II) we can derive the following results using Chain Rule
$\partial L/\partial Z2 = \partial L/\partial A2 * \partial A2/\partial Z2$
$= -(Y/A2 - (1-Y)/(1-A2)) * A2(1-A2) = A2 - Y$
$\partial L/\partial W2 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial W2$
$= (A2-Y) *A1$ -(A)
$\partial L/\partial b2 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial b2 = (A2-Y)$ -(B)

$\partial L/\partial Z1 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial A1 *\partial A1/\partial Z1 = (A2-Y) * W2 * (1-A1^{2})$
$\partial L/\partial W1 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial A1 *\partial A1/\partial Z1 *\partial Z1/\partial W1$
$=(A2-Y) * W2 * (1-A1^{2}) * X$ -(C)
$\partial L/\partial b1 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial A1 *\partial A1/\partial Z1 *\partial Z1/\partial b1$
$= (A2-Y) * W2 * (1-A1^{2})$ -(D)

The key computations in the backward cycle are
$W1 = W1-learningRate * \partial L/\partial W1$ – From (C)
$b1 = b1-learningRate * \partial L/\partial b1$ – From (D)
$W2 = W2-learningRate * \partial L/\partial W2$ – From (A)
$b2 = b2-learningRate * \partial L/\partial b2$ – From (B)

The weights and biases (W1,b1,W2,b2) are updated for each iteration thus minimizing the loss/cost.
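
The backward pass follows directly from equations (A)-(D). A vectorized numpy sketch is shown below; as before, the names are my own, and the gradients are averaged over the m training samples, as the cross-entropy cost over the full training set requires.

def backward_pass(X, Y, A1, A2, W2, m):
    dZ2 = A2 - Y                                      # shape (1, m)
    dW2 = np.dot(dZ2, A1.T) / m                       # (A)
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m      # (B)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))   # (A2-Y) * W2 * (1-A1^2)
    dW1 = np.dot(dZ1, X.T) / m                        # (C)
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m      # (D)
    return dW1, db1, dW2, db2

# One gradient descent step then updates the parameters:
# W1 -= learningRate * dW1;  b1 -= learningRate * db1
# W2 -= learningRate * dW2;  b2 -= learningRate * db2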

These derivations can be represented pictorially using the computation graph (from the book Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville)

### 3. Manually create a data set that is not linearly separable

Initially I create a dataset with 2 classes which has around 9 clusters that cannot be separated by linear boundaries. Note: This data set is saved as data.csv and is used for the R and Octave Neural networks to see how they perform on the same dataset.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

colors=['black','gold']
cmap = matplotlib.colors.ListedColormap(colors)
X, y = make_blobs(n_samples = 400, n_features = 2, centers = 7,
cluster_std = 1.3, random_state = 4)
#Create 2 classes
y=y.reshape(400,1)
y = y % 2
#Plot the figure
plt.figure()
plt.title('Non-linearly separable classes')
plt.scatter(X[:,0], X[:,1], c=y,
marker= 'o', s=50,cmap=cmap)
plt.savefig('fig1.png', bbox_inches='tight')

### 4. Logistic Regression

On the above created dataset, classification with logistic regression is performed, and the decision boundary is plotted. It can be seen that logistic regression performs quite poorly

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

#from DLfunctions import plot_decision_boundary
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!

colors=['black','gold']
cmap = matplotlib.colors.ListedColormap(colors)
X, y = make_blobs(n_samples = 400, n_features = 2, centers = 7,
cluster_std = 1.3, random_state = 4)
#Create 2 classes
y=y.reshape(400,1)
y = y % 2

# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X, y);

# Plot the decision boundary for logistic regression
plot_decision_boundary_n(lambda x: clf.predict(x), X.T, y.T,"fig2.png")


### 5. The 3 layer Neural Network in Python (vectorized)

The vectorized implementation is included below. Note that in the case of Python a learning rate of 0.5 with 4 hidden units performs very well.

## Random data set with 9 clusters
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd

from sklearn.datasets import make_classification, make_blobs
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!

X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T

parameters,costs = computeNN(X2, Y2, numHidden = 4, learningRate=0.5, numIterations = 10000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(4),str(0.5),"fig3.png")
## Cost after iteration 0: 0.692669
## Cost after iteration 1000: 0.246650
## Cost after iteration 2000: 0.227801
## Cost after iteration 3000: 0.226809
## Cost after iteration 4000: 0.226518
## Cost after iteration 5000: 0.226331
## Cost after iteration 6000: 0.226194
## Cost after iteration 7000: 0.226085
## Cost after iteration 8000: 0.225994
## Cost after iteration 9000: 0.225915
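
The functions computeNN and predict come from DLfunctions.py, which is not listed in this post (it is in the Github repository). For completeness, a minimal sketch of what predict could look like is given below; the parameter dictionary keys ("W1", "b1", "W2", "b2") are my assumption, and the actual implementation in the repository may differ. It simply runs the forward pass with the learned parameters and thresholds the sigmoid output at 0.5.

def predict(parameters, X):
    # Keys assumed for illustration; the real DLfunctions.py may use different names
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    # Forward pass with the learned weights
    A1 = np.tanh(np.dot(W1, X) + b1)
    A2 = 1 / (1 + np.exp(-(np.dot(W2, A1) + b2)))
    # Threshold the probabilities at 0.5 to get the class labels
    return (A2 > 0.5).astype(int)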

### 6. The 3 layer Neural Network in R (vectorized)

For this, the dataset created by Python is saved as a CSV, to see how R performs on the same dataset. The vectorized implementation of the Neural Network was a little more interesting in R, as R does not have a package similar to 'numpy'. While numpy handles broadcasting implicitly, in R I had to use the 'sweep' command to broadcast. The implementation is included below. Note that since the initialization with random weights is slightly different, R performs best with a learning rate of 0.1 and with 6 hidden units.
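
As a small illustration of the broadcasting mentioned above (purely for comparison, not part of the implementation): in numpy a bias column vector is added to every column of a matrix implicitly, whereas the same operation in R needs sweep.

import numpy as np
Z = np.zeros((3, 5))                    # 3 hidden units, 5 samples
b = np.array([[1.0], [2.0], [3.0]])     # bias vector, shape (3, 1)
print(Z + b)                            # b is broadcast across all 5 columns
# The rough R equivalent is: sweep(Z, 1, b, FUN = "+")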

source("DLfunctions2_1.R")
z <- as.matrix(read.csv("data.csv",header=FALSE)) #
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
nn <-computeNN(x1, y1, 6, learningRate=0.1,numIterations=10000) # Good
## [1] 0.7075341
## [1] 0.2606695
## [1] 0.2198039
## [1] 0.2091238
## [1] 0.211146
## [1] 0.2108461
## [1] 0.2105351
## [1] 0.210211
## [1] 0.2099104
## [1] 0.2096437
## [1] 0.209409
plotDecisionBoundary(z,nn,6,0.1)

### 7.  The 3 layer Neural Network in Octave (vectorized)

This uses the same dataset that was generated using Python code.
source("DL-function2.m") data=csvread("data.csv"); X=data(:,1:2); Y=data(:,3); # Make sure that the model parameters are correct. Take the transpose of X & Y
#Perform gradient descent [W1,b1,W2,b2,costs]= computeNN(X', Y',4, learningRate=0.5, numIterations = 10000);

### 8a. Performance  for different learning rates (Python)

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd

from sklearn.datasets import make_classification, make_blobs
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!
# Create data
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Create a list of learning rates
learningRate=[0.5,1.2,3.0]
df=pd.DataFrame()
#Compute costs for each learning rate
for lr in learningRate:
    parameters,costs = computeNN(X2, Y2, numHidden = 4, learningRate=lr, numIterations = 10000)
    print(costs)
    df1=pd.DataFrame(costs)
    df=pd.concat([df,df1],axis=1)
#Set the iterations
iterations=[0,1000,2000,3000,4000,5000,6000,7000,8000,9000]
#Create data frame
#Set index
df1=df.set_index([iterations])
df1.columns=[0.5,1.2,3.0]
fig=df1.plot()
fig=plt.title("Cost vs No of Iterations for different learning rates")
plt.savefig('fig4.png', bbox_inches='tight')

### 8b. Performance  for different hidden units (Python)

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd

from sklearn.datasets import make_classification, make_blobs
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!
#Create data set
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Make a list of hidden unis
numHidden=[3,5,7]
df=pd.DataFrame()
#Compute costs for different hidden units
for numHid in numHidden:
    parameters,costs = computeNN(X2, Y2, numHidden = numHid, learningRate=1.2, numIterations = 10000)
    print(costs)
    df1=pd.DataFrame(costs)
    df=pd.concat([df,df1],axis=1)
#Set the iterations
iterations=[0,1000,2000,3000,4000,5000,6000,7000,8000,9000]
#Set index
df1=df.set_index([iterations])
df1.columns=[3,5,7]
#Plot
fig=df1.plot()
fig=plt.title("Cost vs No of Iterations for different no of hidden units")
plt.savefig('fig5.png', bbox_inches='tight')

### 9a. Performance  for different learning rates (R)

source("DLfunctions2_1.R")
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
#Loop through learning rates and compute costs
learningRate <-c(0.1,1.2,3.0)
df <- NULL
for(i in seq_along(learningRate)){
nn <-  computeNN(x1, y1, 6, learningRate=learningRate[i],numIterations=10000)
cost <- nn$costs
df <- cbind(df,cost)
}
#Create dataframe
df <- data.frame(df)
iterations=seq(0,10000,by=1000)
df <- cbind(iterations,df)
names(df) <- c("iterations","0.1","1.2","3.0")
library(reshape2)
# Melt the data
df1 <- melt(df,id="iterations")
#Plot
ggplot(df1) + geom_line(aes(x=iterations,y=value,colour=variable),size=1) +
xlab("Iterations") +
ylab('Cost') + ggtitle("Cost vs No iterations for different learning rates")

### 9b. Performance for different hidden units (R)

source("DLfunctions2_1.R")
# Loop through Num hidden units
numHidden <-c(4,6,9)
df <- NULL
for(i in seq_along(numHidden)){
nn <- computeNN(x1, y1, numHidden[i], learningRate=0.1,numIterations=10000)
cost <- nn$costs
df <- cbind(df,cost)

}      
df <- data.frame(df)
iterations=seq(0,10000,by=1000)
df <- cbind(iterations,df)
names(df) <- c("iterations","4","6","9")
library(reshape2)
# Melt
df1 <- melt(df,id="iterations")
# Plot
ggplot(df1) + geom_line(aes(x=iterations,y=value,colour=variable),size=1)  +
xlab("Iterations") +
ylab('Cost') + ggtitle("Cost vs No iterations for  different number of hidden units")

## 10a. Performance of the Neural Network for different learning rates (Octave)

source("DL-function2.m") plotLRCostVsIterations() print -djph figa.jpg

## 10b. Performance of the Neural Network for different number of hidden units (Octave)

source("DL-function2.m") plotHiddenCostVsIterations() print -djph figa.jpg

## 11. Turning the heat on the Neural Network

In this 2nd part I create a central region of positives, with the outside region as negatives. The points are generated using the equation of a circle $(x-a)^{2} + (y-b)^{2} = R^{2}$. How does the 3 layer Neural Network perform on this? Here's a look! Note: The same dataset is also used for the R and Octave Neural Networks.

## 12. Manually creating a circular central region

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

colors=['black','gold']
cmap = matplotlib.colors.ListedColormap(colors)
x1=np.random.uniform(0,10,800).reshape(800,1)
x2=np.random.uniform(0,10,800).reshape(800,1)
X=np.append(x1,x2,axis=1)
X.shape
# Create (x-a)^2 + (y-b)^2 = R^2
# Label points whose squared distance from the centre (5,5) is <= 6 as the positive class. Perform ravel() to flatten this vector
a=(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2) <= 6).ravel()
Y=a.reshape(800,1)

cmap = matplotlib.colors.ListedColormap(colors)

plt.figure()
plt.title('Non-linearly separable classes')
plt.scatter(X[:,0], X[:,1], c=Y,
marker= 'o', s=15,cmap=cmap)
plt.savefig('fig6.png', bbox_inches='tight')

### 13a. Decision boundary with hidden units=4 and learning rate = 2.2 (Python)

With the above hyper parameters the decision boundary is triangular

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model
execfile("./DLfunctions.py")
x1=np.random.uniform(0,10,800).reshape(800,1)
x2=np.random.uniform(0,10,800).reshape(800,1)
X=np.append(x1,x2,axis=1)
X.shape

# Label points whose squared distance from the centre (5,5) is <= 6 as the positive class. Perform ravel() to flatten this vector
a=(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2) <= 6).ravel()
Y=a.reshape(800,1)

X2=X.T
Y2=Y.T

parameters,costs = computeNN(X2, Y2, numHidden = 4, learningRate=2.2, numIterations = 10000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(4),str(2.2),"fig7.png")

## Cost after iteration 0: 0.692836
## Cost after iteration 1000: 0.331052
## Cost after iteration 2000: 0.326428
## Cost after iteration 3000: 0.474887
## Cost after iteration 4000: 0.247989
## Cost after iteration 5000: 0.218009
## Cost after iteration 6000: 0.201034
## Cost after iteration 7000: 0.197030
## Cost after iteration 8000: 0.193507
## Cost after iteration 9000: 0.191949

### 13b. Decision boundary with hidden units=12 and learning rate = 2.2 (Python)

With 12 hidden units and the same learning rate the network fits the data much better; the cost drops to about 0.04 by iteration 9000.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model
execfile("./DLfunctions.py")
x1=np.random.uniform(0,10,800).reshape(800,1)
x2=np.random.uniform(0,10,800).reshape(800,1)
X=np.append(x1,x2,axis=1)
X.shape

# Label points whose squared distance from the centre (5,5) is <= 6 as the positive class. Perform ravel() to flatten this vector
a=(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2) <= 6).ravel()
Y=a.reshape(800,1)

X2=X.T
Y2=Y.T

parameters,costs = computeNN(X2, Y2, numHidden = 12, learningRate=2.2, numIterations = 10000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(12),str(2.2),"fig8.png")

## Cost after iteration 0: 0.693291
## Cost after iteration 1000: 0.383318
## Cost after iteration 2000: 0.298807
## Cost after iteration 3000: 0.251735
## Cost after iteration 4000: 0.177843
## Cost after iteration 5000: 0.130414
## Cost after iteration 6000: 0.152400
## Cost after iteration 7000: 0.065359
## Cost after iteration 8000: 0.050921
## Cost after iteration 9000: 0.039719

### 14a. Decision boundary with hidden units=9 and learning rate = 0.5 (R)

With 9 hidden units and a learning rate of 0.5, the decision boundary in R also takes a roughly triangular shape.

source("DLfunctions2_1.R")
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
nn <-computeNN(x1, y1, 9, learningRate=0.5,numIterations=10000) # Triangular
## [1] 0.8398838
## [1] 0.3303621
## [1] 0.3127731
## [1] 0.3012791
## [1] 0.3305543
## [1] 0.3303964
## [1] 0.2334615
## [1] 0.1920771
## [1] 0.2341225
## [1] 0.2188118
## [1] 0.2082687
plotDecisionBoundary(z,nn,9,0.5)

### 14b. Decision boundary with hidden units=8 and learning rate = 0.1 (R)

source("DLfunctions2_1.R")
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
nn <-computeNN(x1, y1, 8, learningRate=0.1,numIterations=10000) # Hemisphere
## [1] 0.7273279
## [1] 0.3169335
## [1] 0.2378464
## [1] 0.1688635
## [1] 0.1368466
## [1] 0.120664
## [1] 0.111211
## [1] 0.1043362
## [1] 0.09800573
## [1] 0.09126161
## [1] 0.0840379
plotDecisionBoundary(z,nn,8,0.1)

### 15a. Decision boundary with hidden units=12 and learning rate = 1.5 (Octave)

source("DL-function2.m") data=csvread("data1.csv"); X=data(:,1:2); Y=data(:,3); # Make sure that the model parameters are correct. Take the transpose of X & Y [W1,b1,W2,b2,costs]= computeNN(X', Y',12, learningRate=1.5, numIterations = 10000); plotDecisionBoundary(data, W1,b1,W2,b2) print -djpg fige.jpg

Conclusion: This post implemented a 3 layer Neural Network that creates non-linear decision boundaries while performing classification. With a suitable choice of the number of hidden units and the learning rate, the Neural Network learns these non-linear boundaries quite well.

To be continued…
Watch this space!!

To see all posts check Index of posts

# Deep Learning from first principles in Python, R and Octave – Part 1

“You don’t perceive objects as they are. You perceive them as you are.”
“Your interpretation of physical objects has everything to do with the historical trajectory of your brain – and little to do with the objects themselves.”
“The brain generates its own reality, even before it receives information coming in from the eyes and the other senses. This is known as the internal model”

                          David Eagleman - The Brain: The Story of You

This is the first in a series of posts I intend to write on Deep Learning. This post is inspired by the Deep Learning Specialization by Prof Andrew Ng on Coursera and Neural Networks for Machine Learning by Prof Geoffrey Hinton, also on Coursera. In this post I implement Logistic Regression with a 2 layer Neural Network, i.e. a Neural Network that just has an input layer and an output layer, with no hidden layer. I am certain that any self-respecting Deep Learning/Neural Network practitioner would consider a Neural Network without hidden layers as no Neural Network at all!

This 2 layer network is implemented in Python, R and Octave. I have included Octave in the mix, as Octave is a close cousin of Matlab. These implementations in Python, R and Octave are equivalent vectorized implementations. So, if you are familiar with any one of the languages, you should be able to look at the corresponding code in the other two. You can download this R Markdown file and Octave code from DeepLearning -Part 1

To start with, Logistic Regression is performed using sklearn’s logistic regression package for the cancer data set also from sklearn. This is shown below

## 1. Logistic Regression

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification, make_blobs

from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_breast_cancer

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
# Call the Logisitic Regression function
clf = LogisticRegression().fit(X_train, y_train)
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
## Accuracy of Logistic regression classifier on training set: 0.96
## Accuracy of Logistic regression classifier on test set: 0.96

To check on other classification algorithms, check my post Practical Machine Learning with R and Python – Part 2. You could also take a look at my book, available on Amazon , in which I implement the most popular Machine Learning algorithms in both R and Python – My book ‘Practical Machine Learning with R and Python’ on Amazon

## 2. Logistic Regression as a 2 layer Neural Network

In the following section Logistic Regression is implemented as a 2 layer Neural Network in Python, R and Octave. The same cancer data set from sklearn will be used to train and test the Neural Network in Python, R and Octave. This can be represented diagrammatically as below

The cancer data set has 30 input features, and the target variable ‘output’ is either 0 or 1. Hence the sigmoid activation function will be used in the output layer for classification.

This simple 2 layer Neural Network is shown below
At the input layer there are 30 features and the corresponding weights of these inputs which are initialized to small random values.
$Z= w_{1}x_{1} +w_{2}x_{2} +..+ w_{30}x_{30} + b$
where ‘b’ is the bias term

The Activation function is the sigmoid function which is $a= 1/(1+e^{-z})$
The Loss, when the sigmoid function is used in the output layer, is given by
$L=-(ylog(a) + (1-y)log(1-a))$ (1)

### Forward propagation

In the forward propagation cycle of the Neural Network, the output Z and the output of the activation function (the sigmoid) are first computed. Then, using the target 'y' for the given features, the 'Loss' is computed using equation (1) above.

### Backward propagation

The backward propagation cycle determines how the 'Loss' is impacted by small variations in the weights, from the output back to the input layer. In other words, backward propagation computes the changes in the weights at the input layer which will minimize the loss. Several cycles of gradient descent are performed along the path of steepest descent to find the local minima; that is, the set of weights and biases at the input layer which results in the lowest loss is computed by gradient descent. The weights at the input layer are adjusted in steps whose size is set by a parameter known as the 'learning rate'. Too big a 'learning rate' can overshoot the local minima, and too small a 'learning rate' can take a long time to reach the local minima. This is done over the 'm' training examples.

Chain rule of differentiation
Let y=f(u)
and u=g(x) then
$\partial y/\partial x = \partial y/\partial u * \partial u/\partial x$

Derivative of sigmoid
$\sigma=1/(1+e^{-z})$
Let $x= 1 + e^{-z}$  then
$\sigma = 1/x$
$\partial \sigma/\partial x = -1/x^{2}$
$\partial x/\partial z = -e^{-z}$
Using the chain rule of differentiation we get
$\partial \sigma/\partial z = \partial \sigma/\partial x * \partial x/\partial z$
$=-1/(1+e^{-z})^{2}* -e^{-z} = e^{-z}/(1+e^{-z})^{2}$
Therefore $\partial \sigma/\partial z = \sigma(1-\sigma)$        -(2)
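
Result (2) can be verified numerically with a quick central-difference check (a throwaway snippet, not part of the network code):

import numpy as np
sigmoid = lambda z: 1 / (1 + np.exp(-z))
z, h = 0.3, 1e-5
numerical = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)   # approximate d(sigma)/dz
analytical = sigmoid(z) * (1 - sigmoid(z))                # sigma(1-sigma), from (2)
print(numerical, analytical)   # both ~0.2445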

The 3 equations for the 2 layer Neural Network representation of Logistic Regression are
$L=-(y*log(a) + (1-y)*log(1-a))$      -(a)
$a=1/(1+e^{-z})$      -(b)
$Z= w_{1}x_{1} +w_{2}x_{2} +...+ w_{30}x_{30} +b$   -(c)

The back propagation step requires the computation of $dL/dw_{i}$ and $dL/db_{i}$. In the case of regression it would be $dE/dw_{i}$ and $dE/db_{i}$ where dE is the Mean Squared Error function.
Computing the derivatives for back propagation we have
$dL/da = -(y/a - (1-y)/(1-a))$          -(d)
because $d/dx(logx) = 1/x$ and $d/dx(log(1-x)) = -1/(1-x)$
Also from equation (2) we get
$da/dZ = a (1-a)$                                  – (e)
By chain rule
$\partial L/\partial Z = \partial L/\partial a * \partial a/\partial Z$
therefore substituting the results of (d) & (e) we get
$\partial L/\partial Z = -(y/a - (1-y)/(1-a)) * a(1-a) = a-y$         (f)
Finally
$\partial L/\partial w_{i}= \partial L/\partial a * \partial a/\partial Z * \partial Z/\partial w_{i}$                                                           -(g)
$\partial Z/\partial w_{i} = x_{i}$            – (h)
and from (f) we have  $\partial L/\partial Z =a-y$
Therefore  (g) reduces to
$\partial L/\partial w_{i} = x_{i}* (a-y)$ -(i)
Also
$\partial L/\partial b = \partial L/\partial a * \partial a/\partial Z * \partial Z/\partial b$ -(j)
Since
$\partial Z/\partial b = 1$
$\partial L/\partial b = a-y$

Gradient descent then updates the weights at the input layer and the corresponding bias using the values of $dw_{i}$ and $db$
$w_{i} := w_{i} -\alpha * dw_{i}$
$b := b -\alpha * db$
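
Stacking all m training examples as columns of X (shape numFeatures x m), with A and Y as 1 x m row vectors, the per-sample results (i) and (j) average over the samples into the vectorized form used in the code below
$dw = \frac{1}{m} X(A-Y)^{T}$
$db = \frac{1}{m}\sum (a-y)$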
I found the computation graph representation in the book Deep Learning: Ian Goodfellow, Yoshua Bengio, Aaron Courville, very useful to visualize and also compute the backward propagation. For the 2 layer Neural Network of Logistic Regression the computation graph is shown below

### 3. Neural Network for Logistic Regression -Python code (vectorized)

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer

# Define the sigmoid function
def sigmoid(z):
    a=1/(1+np.exp(-z))
    return a

# Initialize
def initialize(dim):
    w = np.zeros(dim).reshape(dim,1)
    b = 0
    return w

# Compute the loss
def computeLoss(numTraining,Y,A):
    loss=-1/numTraining *np.sum(Y*np.log(A) + (1-Y)*(np.log(1-A)))
    return(loss)

# Execute the forward propagation
def forwardPropagation(w,b,X,Y):
    # Compute Z
    Z=np.dot(w.T,X)+b
    # Determine the number of training samples (columns of X, since X is (features, numSamples))
    numTraining=float(X.shape[1])
    # Compute the output of the sigmoid activation function
    A=sigmoid(Z)
    #Compute the loss
    loss = computeLoss(numTraining,Y,A)
    # Compute the gradients dZ, dw and db
    dZ=A-Y
    dw=1/numTraining*np.dot(X,dZ.T)
    db=1/numTraining*np.sum(dZ)

    # Return the gradients and the loss
    gradients = {"dw": dw,
                 "db": db}
    loss = np.squeeze(loss)
    return gradients, loss

def gradientDescent(w, b, X, Y, numIerations, learningRate):
    losses=[]
    idx =[]
    # Iterate
    for i in range(numIerations):
        # Compute the gradients and loss with a forward propagation pass
        gradients, loss = forwardPropagation(w,b,X,Y)
        #Get the derivatives
        dw = gradients["dw"]
        db = gradients["db"]
        w = w-learningRate*dw
        b = b-learningRate*db

        # Store the loss
        if i % 100 == 0:
            idx.append(i)
            losses.append(loss)
    params = {"w": w,
              "b": b}
    grads = {"dw": dw,
             "db": db}
    return params, grads, losses, idx

# Predict the output for a data set
def predict(w,b,X):
    size=X.shape[1]
    yPredicted=np.zeros((1,size))
    # Compute Z (including the bias term)
    Z=np.dot(w.T,X)+b
    # Compute the sigmoid
    A=sigmoid(Z)
    for i in range(A.shape[1]):
        #If the value is > 0.5 then set as 1
        if(A[0][i] > 0.5):
            yPredicted[0][i]=1
        else:
            # Else set as 0
            yPredicted[0][i]=0

    return yPredicted

#Normalize the data (Frobenius norm of each row)
def normalize(x):
    x_norm = np.linalg.norm(x,axis=1,keepdims=True)
    x= x/x_norm
    return x

# Run the 2 layer Neural Network on the cancer data set

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
# Create train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
# Normalize the data for better performance
X_train1=normalize(X_train)

# Create weight vectors of zeros. The size is the number of features in the data set=30
w=np.zeros((X_train.shape[1],1))
#w=np.zeros((30,1))
b=0

#Normalize the training data so that gradient descent performs better
X_train1=normalize(X_train)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_train2=X_train1.T

# Reshape to remove the rank 1 array and then transpose
y_train1=y_train.reshape(len(y_train),1)
y_train2=y_train1.T

# Run gradient descent for 4000 iterations and compute the weights
# (the learning rate used here is illustrative; the value used in the original run is not shown)
parameters, grads, costs, idx = gradientDescent(w, b, X_train2, y_train2, numIerations=4000, learningRate=0.75)
w = parameters["w"]
b = parameters["b"]

# Normalize X_test
X_test1=normalize(X_test)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=X_test1.T

#Reshape y_test
y_test1=y_test.reshape(len(y_test),1)
y_test2=y_test1.T

# Predict the values for
yPredictionTest = predict(w, b, X_test2)
yPredictionTrain = predict(w, b, X_train2)

# Print the accuracy
print("train accuracy: {} %".format(100 - np.mean(np.abs(yPredictionTrain - y_train2)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(yPredictionTest - y_test)) * 100))

# Plot the Costs vs the number of iterations
fig1=plt.plot(idx,costs)
fig1=plt.title("Gradient descent-Cost vs No of iterations")
fig1=plt.xlabel("No of iterations")
fig1=plt.ylabel("Cost")
fig1.figure.savefig("fig1", bbox_inches='tight')
## train accuracy: 90.3755868545 %
## test accuracy: 89.5104895105 %

Note: It can be seen that the Accuracy on the training and test set is 90.37% and 89.51%. This is comparatively poorer than the 96% which the logistic regression of sklearn achieves! But this is mainly because of the absence of hidden layers which is the real power of neural networks.

### 4. Neural Network for Logistic Regression -R code (vectorized)

source("RFunctions-1.R")
# Define the sigmoid function
sigmoid <- function(z){
a <- 1/(1+ exp(-z))
a
}

# Compute the loss
computeLoss <- function(numTraining,Y,A){
loss <- -1/numTraining* sum(Y*log(A) + (1-Y)*log(1-A))
return(loss)
}

# Compute forward propagation
forwardPropagation <- function(w,b,X,Y){
# Compute Z
Z <- t(w) %*% X +b
#Set the number of samples
numTraining <- ncol(X)
# Compute the activation function
A=sigmoid(Z)

#Compute the loss
loss <- computeLoss(numTraining,Y,A)

# Compute the gradients dZ, dw and db
dZ<-A-Y
dw<-1/numTraining * X %*% t(dZ)
db<-1/numTraining*sum(dZ)

fwdProp <- list("loss" = loss, "dw" = dw, "db" = db)
return(fwdProp)
}

# Perform one cycle of Gradient descent
gradientDescent <- function(w, b, X, Y, numIerations, learningRate){
losses <- NULL
idx <- NULL
# Loop through the number of iterations
for(i in 1:numIerations){
fwdProp <-forwardPropagation(w,b,X,Y)
#Get the derivatives
dw <- fwdProp$dw
db <- fwdProp$db
w = w-learningRate*dw
b = b-learningRate*db
l <- fwdProp$loss
# Store the loss
if(i %% 100 == 0){
idx <- c(idx,i)
losses <- c(losses,l)
}
}
# Return the weights and losses
gradDescnt <- list("w"=w,"b"=b,"dw"=dw,"db"=db,"losses"=losses,"idx"=idx)
return(gradDescnt)
}

# Compute the predicted value for input
predict <- function(w,b,X){
m=dim(X)[2]
# Create a vector of 0's
yPredicted=matrix(rep(0,m),nrow=1,ncol=m)
Z <- t(w) %*% X +b
# Compute sigmoid
A=sigmoid(Z)
for(i in 1:dim(A)[2]){
# If A > 0.5 set value as 1
if(A[1,i] > 0.5)
yPredicted[1,i]=1
else
# Else set as 0
yPredicted[1,i]=0
}
return(yPredicted)
}

# Normalize the matrix
normalize <- function(x){
# Compute the Frobenius norm of each row of the matrix
n<-as.matrix(sqrt(rowSums(x^2)))
# Sweep the rows by the norm. Note the '1' in the call, which makes sweep operate on every row
normalized<-sweep(x, 1, n, FUN="/")
return(normalized)
}

# Run the 2 layer Neural Network on the cancer data set
# Read the data (from sklearn)
cancer <- read.csv("cancer.csv")
# Rename the target variable
names(cancer) <- c(seq(1,30),"output")
# Split as training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]
# Set the features
X_train <-train[,1:30]
y_train <- train[,31]
X_test <- test[,1:30]
y_test <- test[,31]
# Create a matrix of 0's with the number of features
w <-matrix(rep(0,dim(X_train)[2]))
b <-0
X_train1 <- normalize(X_train)
X_train2=t(X_train1)
# Reshape then transpose
y_train1=as.matrix(y_train)
y_train2=t(y_train1)
# Perform gradient descent
gradDescent= gradientDescent(w, b, X_train2, y_train2, numIerations=3000, learningRate=0.77)
# Normalize X_test
X_test1=normalize(X_test)
#Transpose X_test so that we have a matrix as (features, numSamples)
X_test2=t(X_test1)
#Reshape y_test and take transpose
y_test1=as.matrix(y_test)
y_test2=t(y_test1)
# Use the values of the weights generated from Gradient Descent
yPredictionTest = predict(gradDescent$w, gradDescent$b, X_test2)
yPredictionTrain = predict(gradDescent$w, gradDescent$b, X_train2)
sprintf("Train accuracy: %f",(100 - mean(abs(yPredictionTrain - y_train2)) * 100))
## [1] "Train accuracy: 90.845070"
sprintf("test accuracy: %f",(100 - mean(abs(yPredictionTest - y_test)) * 100))
## [1] "test accuracy: 87.323944"
df <-data.frame(gradDescent$idx, gradDescent$losses)
names(df) <- c("iterations","losses")
ggplot(df,aes(x=iterations,y=losses)) + geom_point() + geom_line(col="blue") +
ggtitle("Gradient Descent - Losses vs No of Iterations") +
xlab("No of iterations") + ylab("Losses")

### 5. Neural Network for Logistic Regression -Octave code (vectorized)

1;
# Define sigmoid function
function a = sigmoid(z)
a = 1 ./ (1+ exp(-z));
end

# Compute the loss
function loss=computeLoss(numtraining,Y,A)
loss = -1/numtraining * sum((Y .* log(A)) + (1-Y) .* log(1-A));
end

# Perform forward propagation
function [loss,dw,db,dZ] = forwardPropagation(w,b,X,Y)
% Compute Z
Z = w' * X + b;
numtraining = size(X)(1,2);
# Compute sigmoid
A = sigmoid(Z);
#Compute loss. Note the element wise product
loss =computeLoss(numtraining,Y,A);
# Compute the gradients dZ, dw and db
dZ = A-Y;
dw = 1/numtraining* X * dZ';
db =1/numtraining*sum(dZ);
end

# Compute Gradient Descent
function [w,b,dw,db,losses,index]=gradientDescent(w, b, X, Y, numIerations, learningRate)
#Initialize losses and idx
losses=[];
index=[];
# Loop through the number of iterations
for i=1:numIerations,
[loss,dw,db,dZ] = forwardPropagation(w,b,X,Y);
# Perform Gradient descent
w = w - learningRate*dw;
b = b - learningRate*db;
if(mod(i,100) ==0)
# Append index and loss
index = [index i];
losses = [losses loss];
endif
end
end

# Determine the predicted value for dataset
function yPredicted = predict(w,b,X)
m = size(X)(1,2);
yPredicted=zeros(1,m);
# Compute Z
Z = w' * X + b;
# Compute sigmoid
A = sigmoid(Z);
for i=1:size(X)(1,2),
# Set predicted as 1 if A >= 0.5
if(A(1,i) >= 0.5)
yPredicted(1,i)=1;
else
yPredicted(1,i)=0;
endif
end
end

# Normalize by dividing each value by the sum of squares
function normalized = normalize(x)
# Compute Frobenius norm. Square the elements, sum rows and then find square root
a = sqrt(sum(x .^ 2,2));
# Perform element wise division
normalized = x ./ a;
end

# Split into train and test sets
function [X_train,y_train,X_test,y_test] = trainTestSplit(dataset,trainPercent)
# Create a random index
ix = randperm(length(dataset));
# Split into training
trainSize = floor(trainPercent/100 * length(dataset));
train=dataset(ix(1:trainSize),:);
# And test
test=dataset(ix(trainSize+1:length(dataset)),:);
X_train = train(:,1:30);
y_train = train(:,31);
X_test = test(:,1:30);
y_test = test(:,31);
end

cancer=csvread("cancer.csv");
[X_train,y_train,X_test,y_test] = trainTestSplit(cancer,75);
w=zeros(size(X_train)(1,2),1);
b=0;
X_train1=normalize(X_train);
X_train2=X_train1';
y_train1=y_train';
[w1,b1,dw,db,losses,idx]=gradientDescent(w, b, X_train2, y_train1, numIerations=3000, learningRate=0.75);
# Normalize X_test
X_test1=normalize(X_test);
#Transpose X_test so that we have a matrix as (features, numSamples)
X_test2=X_test1';
y_test1=y_test';
# Use the values of the weights generated from Gradient Descent
yPredictionTest = predict(w1, b1, X_test2);
yPredictionTrain = predict(w1, b1, X_train2);

trainAccuracy=100-mean(abs(yPredictionTrain - y_train1))*100
testAccuracy=100- mean(abs(yPredictionTest - y_test1))*100
trainAccuracy = 90.845
testAccuracy = 89.510
graphics_toolkit('gnuplot')
plot(idx,losses);
title ('Gradient descent- Cost vs No of iterations');
xlabel ("No of iterations");
ylabel ("Cost");

Conclusion: This post started with a simple 2 layer Neural Network implementation of Logistic Regression. Clearly the performance of this simple Neural Network is poorer than that of the highly optimized sklearn Logistic Regression. This is because the above neural network does not have any hidden layers. Deep Learning and Neural Networks achieve extraordinary performance because of the presence of deep hidden layers.

The Deep Learning journey has begun… Don't miss the bus!
Stay tuned for more interesting posts in Deep Learning!!

To see all posts check Index of posts

# The 3rd paperback & kindle editions of my books on Cricket, now on Amazon

The 3rd paperback & kindle edition of both my books on cricket is now available on Amazon

a) Cricket analytics with cricketr, Third Edition. The paperback edition is $12.99 and the kindle edition is $4.99/Rs320. This book is based on my R package 'cricketr', available on CRAN, and uses ESPN Cricinfo Statsguru

b) Beaten by sheer pace!
Cricket analytics with yorkr, 3rd edition. The paperback is $12.99 and the kindle version is $6.99/Rs448. This is based on my R package 'yorkr' on CRAN and uses data from Cricsheet

Pick up your copies today!!

Note: In the 3rd edition of the paperback book, the charts will be in black and white. If you would like the charts to be in color, please check out the 2nd edition of these books

To see all posts see Index of posts

# My book ‘Practical Machine Learning with R and Python’ on Amazon

My book ‘Practical Machine Learning with R and Python – Machine Learning in stereo’ is now available in both paperback ($9.99) and kindle ($6.97/Rs449) versions. In this book I implement some of the most common, but important, Machine Learning algorithms in R and equivalent Python code. This is almost like listening to parallel channels of music in stereo!

1. Practical Machine Learning with R and Python – Machine Learning in Stereo (Paperback)
2. Practical Machine Learning with R and Python – Machine Learning in Stereo (Kindle)

This book is ideal both for beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and also deal with equivalent statements in R and Python. Those who are expert in either of the languages will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Table of Contents
Essential R – 7
Essential Python for Datascience – 54
R vs Python – 77
Regression of a continuous variable – 96
Classification and Cross Validation – 113
Regression techniques and regularization – 134
SVMs, Decision Trees and Validation curves – 175
Splines, GAMs, Random Forests and Boosting – 202
PCA, K-Means and Hierarchical Clustering – 234

Pick up your copy today!! Hope you have a great time learning as I did while implementing these algorithms!

# Practical Machine Learning with R and Python – Part 6

## Introduction

This is the final and concluding part of my series on 'Practical Machine Learning with R and Python'. In this series I included the implementations of the most common Machine Learning algorithms in R and Python. The algorithms implemented were

1. Practical Machine Learning with R and Python – Part 1: In this initial post, I touch upon regression of a continuous target variable. Specifically I touch upon Univariate, Multivariate, Polynomial regression and KNN regression in both R and Python
2. Practical Machine Learning with R and Python – Part 2: In this post, I discuss Logistic Regression, KNN classification and Cross Validation error for both LOOCV and K-Fold in both R and Python
3. Practical Machine Learning with R and Python – Part 3: This 3rd part included feature selection in Machine Learning. Specifically I touch best fit, forward fit, backward fit, ridge (L2 regularization) & lasso (L1 regularization). The post includes equivalent code in R and Python.
4. Practical Machine Learning with R and Python – Part 4: In this part I discussed SVMs, Decision Trees, Validation, Precision-Recall, AUC and ROC curves
5. Practical Machine Learning with R and Python – Part 5: In this penultimate part, I touch upon B-splines, natural splines, smoothing splines, Generalized Additive Models (GAMs), Decision Trees, Random Forests and Gradient Boosted Trees.

In this last part I cover Unsupervised Learning. Specifically I cover the implementations of Principal Component Analysis (PCA), K-Means and Hierarchical Clustering. You can download this R Markdown file from Github at MachineLearning-RandPython-Part6

The content of this post and much more is now available as a compact book on Amazon in both formats – as Paperback ($9.99) and a Kindle version ($6.99/Rs449). See ‘Practical Machine Learning with R and Python – Machine Learning in stereo’

## 1.1a Principal Component Analysis (PCA) – R code

Principal Component Analysis is used to reduce the dimensionality of the input. In the code below the 8 x 8 pixel images of handwritten digits are reduced into their principal components. A scatter plot of the first 2 principal components then gives a very good visual representation of the data

library(dplyr)
library(ggplot2)
#Note: This example is adapted from the example in the book Python Data Science Handbook by
# Jake VanderPlas (https://jakevdp.github.io/PythonDataScienceHandbook/05.09-principal-component-analysis.html)
# Read the digits data (From sklearn datasets)
digits= read.csv("digits.csv")
# Create a digit classes target variable
digitClasses <- factor(digits$X0.000000000000000000e.00.29)

#Invoke the Principal Componsent analysis on columns 1-64
digitsPCA=prcomp(digits[,1:64])

# Create a dataframe of PCA
df <- data.frame(digitsPCA$x)
# Bind the digit classes
df1 <- cbind(df,digitClasses)
# Plot only the first 2 Principal Components as a scatter plot
ggplot(df1,aes(x=PC1,y=PC2,col=digitClasses)) + geom_point() +
ggtitle("Top 2 Principal Components")

## 1.1b Variance explained vs no of principal components – R code

In the code below the variance explained vs the number of principal components is plotted. It can be seen that with 20 Principal Components almost 90% of the variance is explained by this reduced dimensional model.

# Read the digits data (from sklearn datasets)
digits= read.csv("digits.csv")
# Digits target
digitClasses <- factor(digits$X0.000000000000000000e.00.29)
digitsPCA=prcomp(digits[,1:64])

# Get the Standard Deviation
sd=digitsPCA$sdev
# Compute the variance
digitsVar=digitsPCA$sdev^2
#Compute the percent variance explained
percentVarExp=digitsVar/sum(digitsVar)

# Plot the percent variance exlained as a function of the  number of principal components
#plot(cumsum(percentVarExp), xlab="Principal Component",
#     ylab="Cumulative Proportion of Variance Explained",
#     main="Principal Components vs % Variance explained",ylim=c(0,1),type='l',lwd=2,
#       col="blue")

## 1.1c Principal Component Analysis (PCA) – Python code

import numpy as np
from sklearn.decomposition import PCA
from sklearn import decomposition
from sklearn import datasets
import matplotlib.pyplot as plt

# Load the 8 x 8 digits data from sklearn
digits = datasets.load_digits()
# Select only the first 2 principal components
pca = PCA(2)  # project from 64 to 2 dimensions
#Compute the first 2 PCA
projected = pca.fit_transform(digits.data)

# Plot a scatter plot of the first 2 principal components
plt.scatter(projected[:, 0], projected[:, 1],
c=digits.target, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('spectral', 10))
plt.xlabel('PCA 1')
plt.ylabel('PCA 2')
plt.colorbar();
plt.title("Top 2 Principal Components")
plt.savefig('fig1.png', bbox_inches='tight')

## 1.1d Variance explained vs no of principal components – Python code

import numpy as np
from sklearn.decomposition import PCA
from sklearn import decomposition
from sklearn import datasets
import matplotlib.pyplot as plt

# Load the digits data and select all 64 principal components
digits = datasets.load_digits()
pca = PCA(64)
projected = pca.fit_transform(digits.data)

# Obtain the explained variance for each principal component
varianceExp= pca.explained_variance_ratio_
# Compute the total sum of variance
totVarExp=np.cumsum(np.round(pca.explained_variance_ratio_, decimals=4)*100)

# Plot the variance explained as a function of the number of principal components
plt.plot(totVarExp)
plt.xlabel('No of principal components')
plt.ylabel('% variance explained')
plt.title('No of Principal Components vs Total Variance explained')
plt.savefig('fig2.png', bbox_inches='tight')

## 1.2a K-Means – R code

In the code below, the first 2 Principal Components of the handwritten digits are first plotted as a scatter plot. The 10 centroids of the 10 clusters, corresponding to the 10 different digits, are then overlaid on this scatter plot.

library(ggplot2)
# Create digit classes target variable
digitClasses <- factor(digits$X0.000000000000000000e.00.29)
# Compute the Principal Components
digitsPCA=prcomp(digits[,1:64])
# Create a data frame of Principal Components and the digit classes
df <- data.frame(digitsPCA$x)
df1 <- cbind(df,digitClasses)

# Pick only the first 2 principal components
a<- df[,1:2]
# Compute K Means of 10 clusters and allow for 1000 iterations
k<-kmeans(a,10,1000)

# Create a dataframe of the centroids of the clusters
df2<-data.frame(k$centers) #Plot the first 2 principal components with the K Means centroids ggplot(df1,aes(x=PC1,y=PC2,col=digitClasses)) + geom_point() + geom_point(data=df2,aes(x=PC1,y=PC2),col="black",size = 4) + ggtitle("Top 2 Principal Components with KMeans clustering")  ## 1.2b K-Means – Python code The centroids of the 10 different handwritten digits is plotted over the scatter plot of the first 2 principal components. import numpy as np from sklearn.decomposition import PCA from sklearn import decomposition from sklearn import datasets import matplotlib.pyplot as plt from sklearn.datasets import load_digits from sklearn.cluster import KMeans digits = load_digits() # Select only the 1st 2 principal components pca = PCA(2) # project from 64 to 2 dimensions projected = pca.fit_transform(digits.data) # Create 10 different clusters kmeans = KMeans(n_clusters=10) # Compute the clusters kmeans.fit(projected) y_kmeans = kmeans.predict(projected) # Get the cluster centroids centers = kmeans.cluster_centers_ centers #Create a scatter plot of the first 2 principal components plt.scatter(projected[:, 0], projected[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('spectral', 10)) plt.xlabel('PCA 1') plt.ylabel('PCA 2') plt.colorbar(); # Overlay the centroids on the scatter plot plt.scatter(centers[:, 0], centers[:, 1], c='darkblue', s=100) plt.savefig('fig3.png', bbox_inches='tight') ## 1.3a Heirarchical clusters – R code Herirachical clusters is another type of unsupervised learning. It successively joins the closest pair of objects (points or clusters) in succession based on some ‘distance’ metric. In this type of clustering we do not have choose the number of centroids. We can cut the created dendrogram mat an appropriate height to get a desired and reasonable number of clusters These are the following ‘distance’ metrics used while combining successive objects • Ward • Complete • Single • Average • Centroid # Read the IRIS dataset iris <- datasets::iris iris2 <- iris[,-5] species <- iris[,5] #Compute the distance matrix d_iris <- dist(iris2) # Use the 'average' method to for the clsuters hc_iris <- hclust(d_iris, method = "average") # Plot the clusters plot(hc_iris) # Cut tree into 3 groups sub_grp <- cutree(hc_iris, k = 3) # Number of members in each cluster table(sub_grp) ## sub_grp ## 1 2 3 ## 50 64 36 # Draw rectangles around the clusters rect.hclust(hc_iris, k = 3, border = 2:5) ## 1.3a Heirarchical clusters – Python code from sklearn.datasets import load_iris import matplotlib.pyplot as plt from scipy.cluster.hierarchy import dendrogram, linkage # Load the IRIS data set iris = load_iris() # Generate the linkage matrix using the average method Z = linkage(iris.data, 'average') #Plot the dendrogram #dendrogram(Z) #plt.xlabel('Data') #plt.ylabel('Distance') #plt.suptitle('Samples clustering', fontweight='bold', fontsize=14); #plt.savefig('fig4.png', bbox_inches='tight') # Conclusion This is the last and concluding part of my series on Practical Machine Learning with R and Python. These parallel implementations of R and Python can be used as a quick reference while working on a large project. A person who is adept in one of the languages R or Python, can quickly absorb code in the other language. Hope you find this series useful! More interesting things to come. Watch this space! References 1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibesherani, Online Stanford 2. 
Applied Machine Learning in Python Prof Kevyn-Collin Thomson, University Of Michigan, Coursera To see all posts see ‘Index of posts # Practical Machine Learning with R and Python – Part 5 This is the 5th and probably penultimate part of my series on ‘Practical Machine Learning with R and Python’. The earlier parts of this series included 1. Practical Machine Learning with R and Python – Part 1 In this initial post, I touch upon univariate, multivariate, polynomial regression and KNN regression in R and Python 2.Practical Machine Learning with R and Python – Part 2 In this post, I discuss Logistic Regression, KNN classification and cross validation error for both LOOCV and K-Fold in both R and Python 3.Practical Machine Learning with R and Python – Part 3 This post covered ‘feature selection’ in Machine Learning. Specifically I touch best fit, forward fit, backward fit, ridge(L2 regularization) & lasso (L1 regularization). The post includes equivalent code in R and Python. 4.Practical Machine Learning with R and Python – Part 4 In this part I discussed SVMs, Decision Trees, validation, precision recall, and roc curves This post ‘Practical Machine Learning with R and Python – Part 5’ discusses regression with B-splines, natural splines, smoothing splines, generalized additive models (GAMS), bagging, random forest and boosting As with my previous posts in this series, this post is largely based on the following 2 MOOC courses 1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibesherani, Online Stanford 2. Applied Machine Learning in Python Prof Kevyn-Collin Thomson, University Of Michigan, Coursera You can download this R Markdown file and associated data files from Github at MachineLearning-RandPython-Part5 The content of this post and much more is now available as a compact book on Amazon in both formats – as Paperback ($9.99) and a Kindle version($6.99/Rs449/). see ‘Practical Machine Learning with R and Python – Machine Learning in stereo For this part I have used the data sets from UCI Machine Learning repository(Communities and Crime and Auto MPG) ## 1. Splines When performing regression (continuous or logistic) between a target variable and a feature (or a set of features), a single polynomial for the entire range of the data set usually does not perform a good fit.Rather we would need to provide we could fit regression curves for different section of the data set. There are several techniques which do this for e.g. piecewise-constant functions, piecewise-linear functions, piecewise-quadratic/cubic/4th order polynomial functions etc. One such set of functions are the cubic splines which fit cubic polynomials to successive sections of the dataset. The points where the cubic splines join, are called ‘knots’. Since each section has a different cubic spline, there could be discontinuities (or breaks) at these knots. To prevent these discontinuities ‘natural splines’ and ‘smoothing splines’ ensure that the seperate cubic functions have 2nd order continuity at these knots with the adjacent splines. 2nd order continuity implies that the value, 1st order derivative and 2nd order derivative at these knots are equal. A cubic spline with knots $\alpha_{k}$ , k=1,2,3,..K is a piece-wise cubic polynomial with continuous derivative up to order 2 at each knot. We can write $y_{i} = \beta_{0} +\beta_{1}b_{1}(x_{i}) +\beta_{2}b_{2}(x_{i}) + .. + \beta_{K+3}b_{K+3}(x_{i}) + \epsilon_{i}$. 
For each ($x{i},y{i}$), $b_{i}$ are called ‘basis’ functions, where $b_{1}(x_{i})=x_{i}$$b_{2}(x_{i})=x_{i}^2$, $b_{3}(x_{i})=x_{i}^3$, $b_{k+3}(x_{i})=(x_{i} -\alpha_{k})^3$ where k=1,2,3… K The 1st and 2nd derivatives of cubic splines are continuous at the knots. Hence splines provide a smooth continuous fit to the data by fitting different splines to different sections of the data ## 1.1a Fit a 4th degree polynomial – R code In the code below a non-linear function (a 4th order polynomial) is used to fit the data. Usually when we fit a single polynomial to the entire data set the tails of the fit tend to vary a lot particularly if there are fewer points at the ends. Splines help in reducing this variation at the extremities library(dplyr) library(ggplot2) source('RFunctions-1.R') # Read the data df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI df1 <- as.data.frame(sapply(df,as.numeric)) #Select specific columns df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg) auto <- df2[complete.cases(df2),] # Fit a 4th degree polynomial fit=lm(mpg~poly(horsepower,4),data=auto) #Display a summary of fit summary(fit) ## ## Call: ## lm(formula = mpg ~ poly(horsepower, 4), data = auto) ## ## Residuals: ## Min 1Q Median 3Q Max ## -14.8820 -2.5802 -0.1682 2.2100 16.1434 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 23.4459 0.2209 106.161 <2e-16 *** ## poly(horsepower, 4)1 -120.1377 4.3727 -27.475 <2e-16 *** ## poly(horsepower, 4)2 44.0895 4.3727 10.083 <2e-16 *** ## poly(horsepower, 4)3 -3.9488 4.3727 -0.903 0.367 ## poly(horsepower, 4)4 -5.1878 4.3727 -1.186 0.236 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 4.373 on 387 degrees of freedom ## Multiple R-squared: 0.6893, Adjusted R-squared: 0.6861 ## F-statistic: 214.7 on 4 and 387 DF, p-value: < 2.2e-16 #Get the range of horsepower hp <- range(auto$horsepower)
#Create a sequence to be used for plotting
hpGrid <- seq(hp[1],hp[2],by=10)
#Predict for these values of horsepower. Set Standard error as TRUE
pred=predict(fit,newdata=list(horsepower=hpGrid),se=TRUE)
#Compute bands on either side that is 2xSE
seBands=cbind(pred$fit+2*pred$se.fit,pred$fit-2*pred$se.fit)
#Plot the fit with Standard Error bands
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="Polynomial of degree 4")
lines(hpGrid,pred$fit,lwd=2,col="blue")
matlines(hpGrid,seBands,lwd=2,col="blue",lty=3)

## 1.1b Fit a 4th degree polynomial – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
#Read the auto data
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
# Select columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
# Convert all columns to numeric
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
#Drop NAs
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['horsepower']]
y=autoDF3['mpg']
#Create a polynomial of degree 4
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)
# Fit a polynomial regression line
linreg = LinearRegression().fit(X_poly, y)
# Create a range of values
hpGrid = np.arange(np.min(X),np.max(X),10)
hp=hpGrid.reshape(-1,1)
# Transform to 4th degree
poly = PolynomialFeatures(degree=4)
hp_poly = poly.fit_transform(hp)
#Create a scatter plot
plt.scatter(X,y)
# Compute the prediction on the grid
ypred=linreg.predict(hp_poly)
plt.title("Polynomial of degree 4")
fig2=plt.xlabel("Horsepower")
fig2=plt.ylabel("MPG")
# Draw the regression curve
plt.plot(hp,ypred,c="red")
plt.savefig('fig1.png', bbox_inches='tight')

## 1.1c Fit a B-Spline – R Code

In the code below a B-Spline is fit to the data. The B-spline requires the manual selection of knots.

#Splines
library(splines)
# Fit a B-spline to the data. Select knots at 60,75,100,150
fit=lm(mpg~bs(horsepower,df=6,knots=c(60,75,100,150)),data=auto)
# Use the fitted regression to predict
pred=predict(fit,newdata=list(horsepower=hpGrid),se=T)
# Create a scatter plot
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="B-Spline with 4 knots")
#Draw lines with 2 Standard Errors on either side
lines(hpGrid,pred$fit,lwd=2)
lines(hpGrid,pred$fit+2*pred$se,lty="dashed")
lines(hpGrid,pred$fit-2*pred$se,lty="dashed")
abline(v=c(60,75,100,150),lty=2,col="darkgreen")
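The bs() function above generates a B-spline basis that spans the same space of piecewise cubics as the truncated power basis described in section 1. For readers who want to see those basis functions concretely, below is a minimal Python sketch (my own illustration – the toy data, the knot values and the use of numpy least squares are assumptions for demonstration, not part of the R workflow above) which builds the basis $1, x, x^{2}, x^{3}, (x-\alpha_{k})_{+}^{3}$ by hand and fits it by least squares.

import numpy as np

def cubic_spline_basis(x, knots):
    # Columns: 1, x, x^2, x^3 and one truncated cubic (x - k)_+^3 per knot
    cols = [np.ones_like(x), x, x**2, x**3]
    for k in knots:
        cols.append(np.clip(x - k, 0, None)**3)   # zero to the left of the knot
    return np.column_stack(cols)

# Toy data, loosely shaped like the horsepower vs mpg relationship
x = np.linspace(40, 230, 50)
y = 50 - 0.3*x + 0.0007*x**2 + np.random.normal(0, 2, len(x))
B = cubic_spline_basis(x, knots=[60, 75, 100, 150])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)   # least squares fit of the basis coefficients
yhat = B @ beta                                # fitted cubic spline values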

## 1.1d Fit a Natural Spline – R Code

Here a 'Natural Spline' is used for the fit. The Natural Spline extrapolates linearly beyond the boundary knots, so the ends of the fitted function are much more constrained than a regular spline or a global polynomial, where the ends can wag a lot more. Natural splines do not require the explicit selection of knots.

# There is no need to select the knots explicitly here. The flexibility of the
# natural spline can be specified through the degrees of freedom 'df' parameter.

fit2=lm(mpg~ns(horsepower,df=4),data=auto)
pred=predict(fit2,newdata=list(horsepower=hpGrid),se=T)
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="Natural Splines")
lines(hpGrid,pred$fit,lwd=2)
lines(hpGrid,pred$fit+2*pred$se,lty="dashed")
lines(hpGrid,pred$fit-2*pred$se,lty="dashed")

## 1.1e Fit a Smoothing Spline – R code

Here a smoothing spline is used. Smoothing splines also do not require the explicit setting of knots. We can change the 'degrees of freedom (df)' parameter to get the best fit.

# Smoothing spline has a smoothing parameter, the degrees of freedom
# This is too wiggly
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="Smoothing Splines")
# Here df is set to 16. This has a lot of variance
fit=smooth.spline(auto$horsepower,auto$mpg,df=16)
lines(fit,col="red",lwd=2)
# We can use Cross Validation to allow the spline to pick the value of this smoothing
# parameter. We do not need to set the degrees of freedom 'df'
fit=smooth.spline(auto$horsepower,auto$mpg,cv=TRUE)
lines(fit,col="blue",lwd=2)

## 1.1f Splines – Python

There isn't as much treatment of splines in Python and SKLearn. I did find the LSQUnivariateSpline and UnivariateSpline splines in scipy. The LSQUnivariateSpline requires the explicit setting of knots.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from scipy.interpolate import LSQUnivariateSpline
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
auto=autoDF2.dropna()
auto=auto[['horsepower','mpg']].sort_values('horsepower')
# Set the knots manually
knots=[65,75,100,150]
# Create an array for X & y
X=np.array(auto['horsepower'])
y=np.array(auto['mpg'])
# Fit a LSQUnivariateSpline
s = LSQUnivariateSpline(X,y,knots)
#Plot the spline
xs = np.linspace(40,230,1000)
ys = s(xs)
plt.scatter(X, y)
plt.plot(xs, ys)
plt.savefig('fig2.png', bbox_inches='tight')

## 1.2 Generalized Additive Models (GAMs)

Generalized Additive Models (GAMs) are a really powerful ML tool.

$y_{i} = \beta_{0} + f_{1}(x_{i1}) + f_{2}(x_{i2}) + .. +f_{p}(x_{ip}) + \epsilon_{i}$

In GAMs we use a different function for each of the variables. GAMs give a much better fit since we can choose any function for the different features.

## 1.2a Generalized Additive Models (GAMs) – R Code

The plots below show the smooth spline that is fit for each of the features horsepower, cylinder, displacement, year and acceleration. We can use any function, for example loess, a 4th order polynomial etc.

library(gam)
# Fit a smoothing spline for horsepower, cylinder, displacement and acceleration
gam=gam(mpg~s(horsepower,4)+s(cylinder,5)+s(displacement,4)+s(year,4)+s(acceleration,5),data=auto)
# Display the summary of the fit.
# This gives the significance of each of the parameters. An ANOVA is
# also given for each combination of the features
summary(gam)
##
## Call: gam(formula = mpg ~ s(horsepower, 4) + s(cylinder, 5) + s(displacement,
##     4) + s(year, 4) + s(acceleration, 5), data = auto)
## Deviance Residuals:
##     Min      1Q  Median      3Q     Max
## -8.3190 -1.4436 -0.0261  1.2279 12.0873
##
## (Dispersion Parameter for gaussian family taken to be 6.9943)
##
##     Null Deviance: 23818.99 on 391 degrees of freedom
## Residual Deviance: 2587.881 on 370 degrees of freedom
## AIC: 1898.282
##
## Number of Local Scoring Iterations: 3
##
## Anova for Parametric Effects
##                      Df  Sum Sq Mean Sq  F value    Pr(>F)
## s(horsepower, 4)      1 15632.8 15632.8 2235.085 < 2.2e-16 ***
## s(cylinder, 5)        1   508.2   508.2   72.666 3.958e-16 ***
## s(displacement, 4)    1   374.3   374.3   53.514 1.606e-12 ***
## s(year, 4)            1  2263.2  2263.2  323.583 < 2.2e-16 ***
## s(acceleration, 5)    1   372.4   372.4   53.246 1.809e-12 ***
## Residuals           370  2587.9     7.0
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Anova for Nonparametric Effects
##                    Npar Df Npar F     Pr(F)
## (Intercept)
## s(horsepower, 4)         3 13.825 1.453e-08 ***
## s(cylinder, 5)           3 17.668 9.712e-11 ***
## s(displacement, 4)       3 44.573 < 2.2e-16 ***
## s(year, 4)               3 23.364 7.183e-14 ***
## s(acceleration, 5)       4  3.848  0.004453 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
par(mfrow=c(2,3))
plot(gam,se=TRUE)

## 1.2b Generalized Additive Models (GAMs) – Python Code

I did not find the equivalent of GAMs in SKlearn in Python. There was an early prototype (2012) in Github. Looks like it is still work in progress or has probably been abandoned.
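As an aside, GAMs are available in Python outside of scikit-learn. The sketch below uses the pygam package – this is an assumption on my part, since pygam is a separate third-party install and not the SKlearn prototype mentioned above – to fit one smooth term per feature, roughly mirroring the R gam() call.

from pygam import LinearGAM, s
# Re-use the cleaned auto-mpg data frame autoDF3 from the Python code in section 1.1b
X = autoDF3[['horsepower','cylinder','displacement','year','acceleration']].values
y = autoDF3['mpg'].values
# One smooth (spline) term per feature
gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + s(4)).fit(X, y)
gam.summary()   # effective degrees of freedom and significance of each term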
## 1.3 Tree based Machine Learning Models

Tree based Machine Learning models are all based on the 'bootstrapping' technique. In bootstrapping, given a sample of size N, we create datasets of size N by sampling the original dataset with replacement. Machine Learning models are built on the different bootstrapped samples and then averaged.

Decision Trees, as seen above, have the tendency to overfit. There are several techniques that help to avoid this, namely a) Bagging b) Random Forests c) Boosting

### Bagging, Random Forest and Gradient Boosting

Bagging: Bagging, or Bootstrap Aggregation, decreases the variance of predictions by creating separate Decision Tree based ML models on the different bootstrapped samples and then averaging these ML models.

Random Forests: Bagging is a greedy algorithm and tries to produce splits based on all variables which try to minimize the error. However the different ML models have a high correlation. Random Forests remove this shortcoming by using a random subset of the features to split on at each split. Hence the features chosen, and the resulting trees, are much less correlated. When these ML models are averaged the performance is much better.

Boosting: Gradient Boosted Decision Trees also use an ensemble of trees, but they don't build Machine Learning models with a random set of features at each step. Rather, small and simple trees are built. Successive trees try to minimize the error from the earlier trees.

Out of Bag (OOB) Error: In Random Forest and Gradient Boosting, for each bootstrap sample taken from the dataset, there will be samples left out. These are known as Out of Bag samples. The classification accuracy carried out on these OOB samples is known as the OOB error.

## 1.31a Decision Trees – R Code

The code below creates a Decision tree with the cancer training data. The summary of the fit is output. Based on the ML model, the predict function is used on test data and a confusion matrix is output.

# Read the cancer data
library(tree)
library(caret)
library(e1071)
cancer <- read.csv("cancer.csv",stringsAsFactors = FALSE)
cancer <- cancer[,2:32]
cancer$target <- as.factor(cancer$target)
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]
# Create Decision Tree
cancerStatus=tree(target~.,train)
summary(cancerStatus)
##
## Classification tree:
## tree(formula = target ~ ., data = train)
## Variables actually used in tree construction:
## [1] "worst.perimeter"      "worst.concave.points" "area.error"
## [4] "worst.texture"        "mean.texture"         "mean.concave.points"
## Number of terminal nodes:  9
## Residual mean deviance:  0.1218 = 50.8 / 417
## Misclassification error rate: 0.02347 = 10 / 426
pred <- predict(cancerStatus,newdata=test,type="class")
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0 49  7
##          1  8 78
##
##                Accuracy : 0.8944
##                  95% CI : (0.8318, 0.9397)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : 4.641e-15
##
##                   Kappa : 0.7795
##  Mcnemar's Test P-Value : 1
##
##             Sensitivity : 0.8596
##             Specificity : 0.9176
##          Pos Pred Value : 0.8750
##          Neg Pred Value : 0.9070
##              Prevalence : 0.4014
##          Detection Rate : 0.3451
##    Detection Prevalence : 0.3944
##       Balanced Accuracy : 0.8886
##
##        'Positive' Class : 0
## 
# Plot decision tree with labels
plot(cancerStatus)
text(cancerStatus,pretty=0)

## 1.31b Decision Trees – Cross Validation – R Code

We can also perform a Cross Validation on the data to identify the Decision Tree which will give the minimum deviance.

library(tree)
cancer <- cancer[,2:32]
cancer$target <- as.factor(cancer$target)
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Create Decision Tree
cancerStatus=tree(target~.,train)

# Execute 10 fold cross validation
cvCancer=cv.tree(cancerStatus)
plot(cvCancer)

# Plot the deviance vs the tree size
plot(cvCancer$size,cvCancer$dev,type='b')

prunedCancer=prune.tree(cancerStatus,best=4)
plot(prunedCancer)
text(prunedCancer,pretty=0)

pred <- predict(prunedCancer,newdata=test,type="class")
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0 50  7
##          1  7 78
##
##                Accuracy : 0.9014
##                  95% CI : (0.8401, 0.945)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : 7.988e-16
##
##                   Kappa : 0.7948
##  Mcnemar's Test P-Value : 1
##
##             Sensitivity : 0.8772
##             Specificity : 0.9176
##          Pos Pred Value : 0.8772
##          Neg Pred Value : 0.9176
##              Prevalence : 0.4014
##          Detection Rate : 0.3521
##    Detection Prevalence : 0.4014
##       Balanced Accuracy : 0.8974
##
##        'Positive' Class : 0
##

## 1.31c Decision Trees – Python Code

Below is the Python code for creating Decision Trees. The accuracy, precision, recall and F1 score are computed on the test data set.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn import tree
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
import graphviz

cancer = load_breast_cancer()
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 0)

clf = DecisionTreeClassifier().fit(X_train, y_train)

print('Accuracy of Decision Tree classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
y_predicted=clf.predict(X_test)
confusion = confusion_matrix(y_test, y_predicted)
print('Accuracy: {:.2f}'.format(accuracy_score(y_test, y_predicted)))
print('Precision: {:.2f}'.format(precision_score(y_test, y_predicted)))
print('Recall: {:.2f}'.format(recall_score(y_test, y_predicted)))
print('F1: {:.2f}'.format(f1_score(y_test, y_predicted)))

# Plot the Decision Tree
clf = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=cancer.feature_names,
class_names=cancer.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
## Accuracy of Decision Tree classifier on training set: 1.00
## Accuracy of Decision Tree classifier on test set: 0.87
## Accuracy: 0.87
## Precision: 0.97
## Recall: 0.82
## F1: 0.89

## 1.31d Decision Trees – Cross Validation – Python Code

In the code below 10-fold cross validation is performed for different depths of the tree and the accuracy is computed. The accuracy on the test set seems to plateau when the depth is 8. But it is seen to increase again from 10 to 12.
More analysis needs to be done here.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import train_test_split, KFold

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)

def computeCVAccuracy(X,y,folds):
    accuracy=[]
    foldAcc=[]
    depth=[1,2,3,4,5,6,7,8,9,10,11,12]
    nK=len(X)/float(folds)
    xval_err=0
    for i in depth:
        kf = KFold(len(X),n_folds=folds)
        for train_index, test_index in kf:
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]
            clf = DecisionTreeClassifier(max_depth = i).fit(X_train, y_train)
            score=clf.score(X_test, y_test)
            accuracy.append(score)
        foldAcc.append(np.mean(accuracy))
    return(foldAcc)

cvAccuracy=computeCVAccuracy(pd.DataFrame(X_cancer),pd.DataFrame(y_cancer),folds=10)
df1=pd.DataFrame(cvAccuracy)
df1.columns=['cvAccuracy']
df=df1.reindex([1,2,3,4,5,6,7,8,9,10,11,12])
df.plot()
plt.title("Decision Tree - 10-fold Cross Validation Accuracy vs Depth of tree")
plt.xlabel("Depth of tree")
plt.ylabel("Accuracy")
plt.savefig('fig3.png', bbox_inches='tight')

## 1.4a Random Forest – R code

A Random Forest is fit using the Boston data. The summary shows that 4 variables were randomly chosen at each split and the resulting ML model explains 88.72% of the variance (computed on the Out of Bag samples). Also the variable importance is plotted. It can be seen that 'rooms' and 'status' are the most influential features in the model.

library(randomForest)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Select specific columns
Boston <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color",
"status","medianValue")
# Fit a Random Forest on the Boston training data
rfBoston=randomForest(medianValue~.,data=Boston)
# Display the summary of the fit. The mean of squared residuals and the % variance
# explained are reported. About 4 variables were tried at each split for 500 trees.
# The MSE and percent variance are computed on the Out of Bag samples
rfBoston
##
## Call:
##  randomForest(formula = medianValue ~ ., data = Boston)
##                Type of random forest: regression
##                      Number of trees: 500
## No. of variables tried at each split: 4
##
##           Mean of squared residuals: 9.521672
##                     % Var explained: 88.72
#List and plot the variable importances
importance(rfBoston)
##              IncNodePurity
## crimeRate        2602.1550
## zone              258.8057
## indus            2599.6635
## charles           240.2879
## nox              2748.8485
## rooms           12011.6178
## age              1083.3242
## distances        2432.8962
## highways          393.5599
## tax              1348.6987
## teacherRatio     2841.5151
## color             731.4387
## status          12735.4046
varImpPlot(rfBoston)

## 1.4b Random Forest – OOB and Cross Validation Error – R code

The figure below shows the OOB error and the Cross Validation error vs 'mtry'. Here mtry indicates the number of random features that are chosen at each split.
The lowest test error occurs when mtry = 8.

library(randomForest)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Select specific columns
Boston <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color",
"status","medianValue")
# Split as training and test sets
train_idx <- trainTestSplit(Boston,trainPercent=75,seed=5)
train <- Boston[train_idx, ]
test <- Boston[-train_idx, ]
#Initialize OOB error and testError
oobError <- NULL
testError <- NULL
# In the code below the number of variables to consider at each split is increased
# from 1 - 13 (max features) and the OOB error and the test MSE are computed
for(i in 1:13){
fitRF=randomForest(medianValue~.,data=train,mtry=i,ntree=400)
oobError[i] <-fitRF$mse[400]
pred <- predict(fitRF,newdata=test)
testError[i] <- mean((pred-test$medianValue)^2)
}
# We can see the OOB and Test Error. It can be seen that the Random Forest performs
# best with the lowest MSE at mtry=6
matplot(1:13,cbind(testError,oobError),pch=19,col=c("red","blue"),
type="b",xlab="mtry(no of variables at each split)",
ylab="Mean Squared Error",
main="Random Forest - OOB and Test Error")
legend("topright",legend=c("OOB","Test"),pch=19,col=c("red","blue"))

## 1.4c Random Forest – Python code

The Python code for Random Forest Regression is shown below. The training and test scores are computed. The variable importance shows that 'rooms' and 'status' are the most influential of the variables.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)
print('R-squared score (training): {:.3f}'
.format(regr.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'
.format(regr.score(X_test, y_test)))
feature_names=['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']
print(regr.feature_importances_)
plt.figure(figsize=(10,6),dpi=80)
c_features=X_train.shape[1]
plt.barh(np.arange(c_features),regr.feature_importances_)
plt.xlabel("Feature importance")
plt.ylabel("Feature name")
plt.yticks(np.arange(c_features), feature_names)
plt.tight_layout()
plt.savefig('fig4.png', bbox_inches='tight')
## R-squared score (training): 0.917
## R-squared score (test): 0.734
## [ 0.03437382  0.          0.00580335  0.          0.00731004  0.36461548
##   0.00638577  0.03432173  0.0041244   0.01732328  0.01074148  0.0012638
##   0.51373683]

## 1.4d Random Forest – Cross Validation and OOB Error – Python code

As with R, 'max_features' determines the random number of features the random forest will use at each split.
The plot shows that when max_features=8 the MSE is lowest.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']
cvError=[]
oobError=[]
oobMSE=[]
for i in range(1,13):
    regr = RandomForestRegressor(max_depth=4, n_estimators=400,max_features=i,oob_score=True,random_state=0)
    mse= np.mean(cross_val_score(regr, X, y, cv=5,scoring = 'neg_mean_squared_error'))
    # Since this is neg_mean_squared_error I have inverted the sign to get MSE
    cvError.append(-mse)
    # Fit on all data to compute OOB error
    regr.fit(X, y)
    # Record the OOB error for each max_features=i setting
    oob = 1 - regr.oob_score_
    oobError.append(oob)
    # Get the Out of Bag prediction
    oobPred=regr.oob_prediction_
    # Compute the Mean Squared Error between OOB Prediction and target
    mseOOB=np.mean(np.square(oobPred-y))
    oobMSE.append(mseOOB)
# Plot the CV Error and OOB Error
# Set max_features
maxFeatures=np.arange(1,13)
cvError=pd.DataFrame(cvError,index=maxFeatures)
oobMSE=pd.DataFrame(oobMSE,index=maxFeatures)
# Combine the CV Error and OOB MSE into a single data frame for plotting
df=pd.concat([cvError,oobMSE],axis=1)
#Plot
fig8=df.plot()
fig8=plt.title('Random forest - CV Error and OOB Error vs max_features')
fig8.figure.savefig('fig8.png', bbox_inches='tight')
#Plot the OOB Error vs max_features
plt.plot(range(1,13),oobError)
fig2=plt.title("Random Forest - OOB Error vs max_features (variable no of features)")
fig2=plt.xlabel("max_features (variable no of features)")
fig2=plt.ylabel("OOB Error")
fig2.figure.savefig('fig7.png', bbox_inches='tight')

## 1.5a Boosting – R code

Here a Gradient Boosted ML Model is built with n.trees=5000, a learning rate of 0.01 and a depth of 4. The feature importance plot also shows that rooms and status are the 2 most important features. The MSE vs the number of trees plateaus around 2000 trees.

library(gbm)
# Perform gradient boosting on the Boston data set. The distribution is gaussian since we are
# using MSE. The interaction depth specifies the number of splits
boostBoston=gbm(medianValue~.,data=train,distribution="gaussian",n.trees=5000,
shrinkage=0.01,interaction.depth=4)
#The summary gives the variable importance. The 2 most significant variables are
# number of rooms and lower status
summary(boostBoston)
##                       var    rel.inf
## rooms               rooms 42.2267200
## status             status 27.3024671
## distances       distances  7.9447972
## crimeRate       crimeRate  5.0238827
## nox                   nox  4.0616548
## teacherRatio teacherRatio  3.1991999
## age                   age  2.7909772
## color               color  2.3436295
## tax                   tax  2.1386213
## charles           charles  1.3799109
## highways         highways  0.7644026
## indus               indus  0.7236082
## zone                 zone  0.1001287
# The plots below show how each variable relates to the median value of the home. As
# the number of rooms increases the median value increases, and with an increase in lower status
# the median value decreases
par(mfrow=c(1,2))
#Plot the relation between the top 2 features and the target
plot(boostBoston,i="rooms")
plot(boostBoston,i="status")
# Create a sequence of trees between 100-5000 incremented by 50
nTrees=seq(100,5000,by=50)
# Predict the values for the test data
pred <- predict(boostBoston,newdata=test,n.trees=nTrees)
# Compute the MSE for each of the number of trees
boostError <- apply((pred-test$medianValue)^2,2,mean)
#Plot the MSE vs the number of trees
plot(nTrees,boostError,pch=19,col="blue",ylab="Mean Squared Error",
main="Boosting Test Error")

## 1.5b Cross Validation Boosting – R code

Included below is a plot of the cross validation error vs the learning rate. The lowest error is when the learning rate = 0.09.

cvError <- NULL
s <- c(.001,0.01,0.03,0.05,0.07,0.09,0.1)
for(i in seq_along(s)){
cvBoost=gbm(medianValue~.,data=train,distribution="gaussian",n.trees=5000,
shrinkage=s[i],interaction.depth=4,cv.folds=5)
cvError[i] <- mean(cvBoost$cv.error)
}
# Create a data frame for plotting
a <- rbind(s,cvError)
b <- as.data.frame(t(a))
# It can be seen that a shrinkage parameter of 0.05 gives the lowest CV Error
ggplot(b,aes(s,cvError)) + geom_point() + geom_line(color="blue") +
xlab("Shrinkage") + ylab("Cross Validation Error") +
ggtitle("Gradient boosted trees - Cross Validation error vs Shrinkage")

## 1.5c Boosting – Python code

A gradient boosted ML model in Python is created below. The R-squared score is computed on the training and test data.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
regr = GradientBoostingRegressor()
regr.fit(X_train, y_train)
print('R-squared score (training): {:.3f}'
.format(regr.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'
.format(regr.score(X_test, y_test)))
## R-squared score (training): 0.983
## R-squared score (test): 0.821

## 1.5d Cross Validation Boosting – Python code

The cross validation error is computed as the learning rate is varied. The minimum CV error occurs when lr = 0.04.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']
cvError=[]
learning_rate =[.001,0.01,0.03,0.05,0.07,0.09,0.1]
for lr in learning_rate:
    regr = GradientBoostingRegressor(max_depth=4, n_estimators=400,learning_rate =lr,random_state=0)
    mse= np.mean(cross_val_score(regr, X, y, cv=10,scoring = 'neg_mean_squared_error'))
    # Since this is neg_mean_squared_error I have inverted the sign to get MSE
    cvError.append(-mse)
learning_rate =[.001,0.01,0.03,0.05,0.07,0.09,0.1]
plt.plot(learning_rate,cvError)
plt.title("Gradient Boosting - 10-fold CV - Mean Squared Error vs learning rate")
plt.xlabel("Learning rate")
plt.ylabel("Mean Squared Error")
plt.savefig('fig6.png', bbox_inches='tight')

## Conclusion

This post covered Splines and Tree based ML models like Bagging, Random Forest and Boosting. Stay tuned for further updates.

To see all posts see 'Index of posts'

# Practical Machine Learning with R and Python – Part 4

This is the 4th installment of my 'Practical Machine Learning with R and Python' series. In this part I discuss classification with Support Vector Machines (SVMs), using both a Linear and a Radial basis kernel, and Decision Trees. Further, a closer look is taken at some of the metrics associated with binary classification, namely accuracy vs precision and recall. I also touch upon Validation curves, Precision-Recall, ROC curves and AUC with equivalent code in R and Python.

This post is a continuation of my 3 earlier posts on Practical Machine Learning in R and Python

1. Practical Machine Learning with R and Python – Part 1
2. Practical Machine Learning with R and Python – Part 2
3. Practical Machine Learning with R and Python – Part 3

The RMarkdown file with the code and the associated data files can be downloaded from Github at MachineLearning-RandPython-Part4

The content of this post and much more is now available as a compact book on Amazon in both formats – as Paperback ($9.99) and a Kindle version ($6.99/Rs449/). See 'Practical Machine Learning with R and Python – Machine Learning in Stereo'

Support Vector Machines (SVM) are another useful Machine Learning model that can be used for both regression and classification problems. SVMs used in classification compute the hyperplane that separates the 2 classes with the maximum margin. To do this the features may be transformed into a larger multi-dimensional feature space. SVMs can be used with different kernels, namely linear, polynomial or radial basis, to determine the best fitting model for a given classification problem.

In the 2nd part of this series, Practical Machine Learning with R and Python – Part 2, I had mentioned the various metrics that are used in classification ML problems, namely Accuracy, Precision, Recall and F1 score. Accuracy gives the fraction of data that were correctly classified as belonging to the +ve or -ve class. However 'accuracy' in itself is not a good enough measure because it does not take into account the fraction of the data that were incorrectly classified. This issue becomes even more critical in different domains. For e.g. a surgeon who would like to detect cancer would like to err on the side of caution, and classify even a possibly non-cancerous patient as possibly having cancer, rather than mis-classify a malignancy as benign. Here we would like to increase recall or sensitivity, which is given by Recall = TP/(TP+FN), i.e. we try to reduce mis-classification by either increasing the true positives (TP) or reducing the false negatives (FN).

On the other hand, search algorithms would like to increase precision, which tries to reduce the number of irrelevant results in the search result: Precision = TP/(TP+FP). In other words we do not want 'false positives' or irrelevant results to come up in the search results, and there is a need to reduce the false positives. When we try to increase 'precision', we do so at the cost of 'recall', and vice-versa. I found this diagram and explanation in Wikipedia very useful.

Source: Wikipedia
"Consider a brain surgeon tasked with removing a cancerous tumor from a patient's brain. The surgeon needs to remove all of the tumor cells since any remaining cancer cells will regenerate the tumor. Conversely, the surgeon must not remove healthy brain cells since that would leave the patient with impaired brain function. The surgeon may be more liberal in the area of the brain she removes to ensure she has extracted all the cancer cells. This decision increases recall but reduces precision. On the other hand, the surgeon may be more conservative in the brain she removes to ensure she extracts only cancer cells. This decision increases precision but reduces recall. That is to say, greater recall increases the chances of removing healthy cells (negative outcome) and increases the chances of removing all cancer cells (positive outcome). Greater precision decreases the chances of removing healthy cells (positive outcome) but also decreases the chances of removing all cancer cells (negative outcome)."
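To make the precision and recall formulas above concrete, here is a small worked example in Python with purely hypothetical confusion-matrix counts (the numbers are illustrative only and are not taken from any of the models in this post).

# Hypothetical counts from a binary classifier
TP, FP, FN, TN = 85, 10, 5, 50
recall = TP / (TP + FN)                       # 85/90 ~ 0.94 : few positives missed
precision = TP / (TP + FP)                    # 85/95 ~ 0.89 : some negatives flagged
accuracy = (TP + TN) / (TP + FP + FN + TN)    # 135/150 = 0.90
print(precision, recall, accuracy)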
## 1.1a Linear SVM – R code

In the R code below I use an SVM with a linear kernel.

source('RFunctions-1.R')
library(dplyr)
library(e1071)
library(caret)
library(reshape2)
library(ggplot2)
# Read data. Data from SKLearn
cancer <- read.csv("cancer.csv")
cancer$target <- as.factor(cancer$target)
# Split into training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]
# Fit a linear basis kernel. Do not scale the data
svmfit=svm(target~., data=train, kernel="linear",scale=FALSE)
ypred=predict(svmfit,test)
#Print a confusion matrix
confusionMatrix(ypred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0 54  3
##          1  3 82
##
##                Accuracy : 0.9577
##                  95% CI : (0.9103, 0.9843)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : <2e-16
##
##                   Kappa : 0.9121
##  Mcnemar's Test P-Value : 1
##
##             Sensitivity : 0.9474
##             Specificity : 0.9647
##          Pos Pred Value : 0.9474
##          Neg Pred Value : 0.9647
##              Prevalence : 0.4014
##          Detection Rate : 0.3803
##    Detection Prevalence : 0.4014
##       Balanced Accuracy : 0.9560
##
##        'Positive' Class : 0
## 

## 1.1b Linear SVM – Python code

The code below creates a SVM with linear basis in Python and also dumps the corresponding classification metrics

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

from sklearn.datasets import make_classification, make_blobs
from sklearn.datasets import load_breast_cancer

from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
clf = LinearSVC().fit(X_train, y_train)
print('Breast cancer dataset')
print('Accuracy of Linear SVC classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Linear SVC classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
## Breast cancer dataset
## Accuracy of Linear SVC classifier on training set: 0.92
## Accuracy of Linear SVC classifier on test set: 0.94

## 1.2 Dummy classifier

Often when we perform classification tasks using any ML model, namely logistic regression, SVM, neural networks etc., it is very useful to determine how well the ML model performs against a dummy classifier. A dummy classifier uses some simple computation like the frequency of the majority class, instead of fitting an ML model. It is essential that our ML model does much better than the dummy classifier. This problem is even more important in imbalanced classes where we have only about 10% of +ve samples. If any ML model we create has an accuracy of about 0.90 then it is evident that our classifier is not doing any better than a dummy classifier, which can just take a majority count of this imbalanced class and also come up with 0.90. We need to be able to do better than that.

In the examples below (1.3a & 1.3b) it can be seen that SVMs with ‘radial basis’ kernel with unnormalized data, for both R and Python, do not perform any better than the dummy classifier.
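As a quick illustration of the majority-class baseline discussed above, the short Python sketch below uses a made-up, imbalanced label vector; sklearn's DummyClassifier with the 'most_frequent' strategy simply reproduces the majority class frequency, and other strategies such as 'stratified' or 'uniform' are selected through the same strategy parameter.

import numpy as np
from sklearn.dummy import DummyClassifier

# Made-up imbalanced labels: 90% of class 0 and 10% of class 1
y = np.array([0]*90 + [1]*10)
X = np.zeros((100, 1))          # the features are irrelevant to a dummy classifier
clf = DummyClassifier(strategy='most_frequent').fit(X, y)
print(clf.score(X, y))          # 0.9, i.e. the majority class frequency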

## 1.2a Dummy classifier – R code

R does not seem to have an explicit dummy classifier. I created a simple dummy classifier that predicts the majority class. SKlearn in Python also includes other strategies like uniform, stratified etc. but this should be possible to create in R also.

# Create a simple dummy classifier that computes the ratio of the majority class to the total
DummyClassifierAccuracy <- function(train,test,type="majority"){
if(type=="majority"){
count <- sum(train$target==1)/dim(train)[1]
}
count
}

cancer <- read.csv("cancer.csv")
cancer$target <- as.factor(cancer$target)
# Create training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]
#Dummy classifier majority class
acc=DummyClassifierAccuracy(train,test)
sprintf("Accuracy is %f",acc)
## [1] "Accuracy is 0.638498"

## 1.2b Dummy classifier – Python code

This dummy classifier uses the majority class.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.metrics import confusion_matrix

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 0)

# Negative class (0) is most frequent
dummy_majority = DummyClassifier(strategy = 'most_frequent').fit(X_train, y_train)
y_dummy_predictions = dummy_majority.predict(X_test)
print('Dummy classifier accuracy on test set: {:.2f}'
.format(dummy_majority.score(X_test, y_test)))
## Dummy classifier accuracy on test set: 0.63

## 1.3a – Radial SVM (un-normalized) – R code

SVMs perform better when the data is normalized or scaled. The 2 examples below show that an SVM with a radial basis kernel does not perform any better than the dummy classifier on unnormalized data.

library(dplyr)
library(e1071)
library(caret)
library(reshape2)
library(ggplot2)
# Radial SVM unnormalized
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]
# Unnormalized data
svmfit=svm(target~., data=train, kernel="radial",cost=10,scale=FALSE)
ypred=predict(svmfit,test)
confusionMatrix(ypred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0  0  0
##          1 57 85
##
##                Accuracy : 0.5986
##                  95% CI : (0.5131, 0.6799)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : 0.5363
##
##                   Kappa : 0
##  Mcnemar's Test P-Value : 1.195e-13
##
##             Sensitivity : 0.0000
##             Specificity : 1.0000
##          Pos Pred Value :    NaN
##          Neg Pred Value : 0.5986
##              Prevalence : 0.4014
##          Detection Rate : 0.0000
##    Detection Prevalence : 0.0000
##       Balanced Accuracy : 0.5000
##
##        'Positive' Class : 0
## 

## 1.4b – Radial SVM (un-normalized) – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)

clf = SVC(C=10).fit(X_train, y_train)
print('Breast cancer dataset (unnormalized features)')
print('Accuracy of RBF-kernel SVC on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of RBF-kernel SVC on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
## Breast cancer dataset (unnormalized features)
## Accuracy of RBF-kernel SVC on training set: 1.00
## Accuracy of RBF-kernel SVC on test set: 0.63

## 1.5a – Radial SVM (Normalized) -R Code

The data is scaled (normalized) before using the SVM model. The SVM model has 2 parameters: a) C – a large C means less regularization and a small C means more regularization b) gamma – a small gamma gives a broader, smoother decision boundary with more misclassification, while a larger gamma gives a tighter decision boundary.

The R code below computes the accuracy as the regularization parameter is changed.

trainingAccuracy <- NULL
testAccuracy <- NULL
C1 <- c(.01,.1, 1, 10, 20)
for(i in  C1){
# Fit a radial SVM on the scaled data for each value of the cost parameter C
svmfit=svm(target~., data=train, kernel="radial", cost=i, scale=TRUE)
ypredTrain <-predict(svmfit,train)
ypredTest=predict(svmfit,test)
a <- confusionMatrix(ypredTrain,train$target)
b <- confusionMatrix(ypredTest,test$target)
trainingAccuracy <- c(trainingAccuracy,a$overall[1])
testAccuracy <- c(testAccuracy,b$overall[1])

}
print(trainingAccuracy)
##  Accuracy  Accuracy  Accuracy  Accuracy  Accuracy
## 0.6384977 0.9671362 0.9906103 0.9976526 1.0000000
print(testAccuracy)
##  Accuracy  Accuracy  Accuracy  Accuracy  Accuracy
## 0.5985915 0.9507042 0.9647887 0.9507042 0.9507042
a <-rbind(C1,as.numeric(trainingAccuracy),as.numeric(testAccuracy))
b <- data.frame(t(a))
names(b) <- c("C1","trainingAccuracy","testAccuracy")
df <- melt(b,id="C1")
ggplot(df) + geom_line(aes(x=C1, y=value, colour=variable),size=2) +
xlab("C (SVC regularization)value") + ylab("Accuracy") +
ggtitle("Training and test accuracy vs C(regularization)")

## 1.5b – Radial SVM (normalized) – Python

The Radial basis kernel is used on normalized data for a range of ‘C’ values and the result is plotted.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.datasets import load_breast_cancer
scaler = MinMaxScaler()

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

print('Breast cancer dataset (normalized with MinMax scaling)')
trainingAccuracy=[]
testAccuracy=[]
for C1 in [.01,.1, 1, 10, 20]:
    clf = SVC(C=C1).fit(X_train_scaled, y_train)
    acctrain=clf.score(X_train_scaled, y_train)
    accTest=clf.score(X_test_scaled, y_test)
    trainingAccuracy.append(acctrain)
    testAccuracy.append(accTest)

# Create a dataframe
C1=[.01,.1, 1, 10, 20]
trainingAccuracy=pd.DataFrame(trainingAccuracy,index=C1)
testAccuracy=pd.DataFrame(testAccuracy,index=C1)

# Plot training and test R squared as a function of alpha
df=pd.concat([trainingAccuracy,testAccuracy],axis=1)
df.columns=['trainingAccuracy','testAccuracy']

fig1=df.plot()
fig1=plt.title('Training and test accuracy vs C (SVC)')
fig1.figure.savefig('fig1.png', bbox_inches='tight')
## Breast cancer dataset (normalized with MinMax scaling)

Output image:

## 1.6a Validation curve – R code

Sklearn includes a function for creating validation curves, by varying a parameter and computing and plotting the accuracy as gamma or C is changed. I did not find this in R, but I think it is a useful function, so I have created the R equivalent of it.

# The R equivalent of np.logspace
seqLogSpace <- function(start,stop,len){
a=seq(log10(10^start),log10(10^stop),length=len)
10^a
}
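# For example, seqLogSpace(-3, 2, 20) returns 20 values from 1e-03 to 1e+02,
# analogous to np.logspace(-3, 2, 20) in Python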

# Read the data. This is taken from the SKlearn cancer data
cancer$target <- as.factor(cancer$target)

set.seed(6)

# Create the range of C1 in log space
param_range = seqLogSpace(-3,2,20)
# Initialize the overall training and test accuracy to NULL
overallTrainAccuracy <- NULL
overallTestAccuracy <- NULL

# Loop over the parameter range
for(i in param_range){
# Set no of folds
noFolds=5
# Create the rows which fall into different folds from 1..noFolds
folds = sample(1:noFolds, nrow(cancer), replace=TRUE)
# Initialize the training and test accuracy of folds to 0
trainingAccuracy <- 0
testAccuracy <- 0

# Loop through the folds
for(j in 1:noFolds){
# The training is all rows for which the row is != j (k-1 folds -> training)
train <- cancer[folds!=j,]
# The rows which have j as the index become the test set
test <- cancer[folds==j,]
# Create a radial SVM model for this value of the regularization parameter (cost=i)
svmfit=svm(target~., data=train, kernel="radial", cost=i, scale=TRUE)
# Add up all the fold accuracy for training and test separately
ypredTrain <-predict(svmfit,train)
ypredTest=predict(svmfit,test)

# Create confusion matrix
a <- confusionMatrix(ypredTrain,train$target)
b <- confusionMatrix(ypredTest,test$target)
# Get the accuracy
trainingAccuracy <- trainingAccuracy + a$overall[1]
testAccuracy <- testAccuracy + b$overall[1]

}
# Compute the average of accuracy for K folds for number of features 'i'
overallTrainAccuracy=c(overallTrainAccuracy,trainingAccuracy/noFolds)
overallTestAccuracy=c(overallTestAccuracy,testAccuracy/noFolds)
}
#Create a dataframe
a <- rbind(param_range,as.numeric(overallTrainAccuracy),
as.numeric(overallTestAccuracy))
b <- data.frame(t(a))
names(b) <- c("C1","trainingAccuracy","testAccuracy")
df <- melt(b,id="C1")
#Plot in log axis
ggplot(df) + geom_line(aes(x=C1, y=value, colour=variable),size=2) +
xlab("C (SVC regularization)value") + ylab("Accuracy") +
ggtitle("Training and test accuracy vs C(regularization)") + scale_x_log10()

## 1.6b Validation curve – Python

Compute and plot the validation curve as gamma is varied.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_breast_cancer

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X_cancer)

# Create gamma values from 10^-3 to 10^2, with 20 values equally spaced in log space
param_range = np.logspace(-3, 2, 20)
# Compute the validation curve
train_scores, test_scores = validation_curve(SVC(), X_scaled, y_cancer,
param_name='gamma',
param_range=param_range, cv=10)

#Plot the figure
fig2=plt.figure()

#Compute the mean
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)

fig2=plt.title('Validation Curve with SVM')
fig2=plt.xlabel('$\gamma$ (gamma)')
fig2=plt.ylabel('Score')
fig2=plt.ylim(0.0, 1.1)
lw = 2

fig2=plt.semilogx(param_range, train_scores_mean, label='Training score',
color='darkorange', lw=lw)

fig2=plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color='darkorange', lw=lw)

fig2=plt.semilogx(param_range, test_scores_mean, label='Cross-validation score',
color='navy', lw=lw)

fig2=plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color='navy', lw=lw)
fig2.figure.savefig('fig2.png', bbox_inches='tight')


Output image:

## 1.7a Validation Curve (Preventing data leakage) – Python code

In the course Applied Machine Learning in Python, the Professor states that when we apply the same data transformation to an entire dataset, it will cause data leakage. "The proper way to do cross-validation when you need to scale the data is not to scale the entire dataset with a single transform, since this will indirectly leak information into the training data about the whole dataset, including the test data (see the lecture on data leakage later in the course). Instead, scaling/normalizing must be computed and applied for each cross-validation fold separately"

So I apply separate scaling to the training and testing folds and plot the result. In the lecture the Prof states that this can be done using pipelines; a pipeline-based sketch is shown below, followed by my fold-by-fold version.
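For reference, here is a minimal sketch of the pipeline approach (my own illustration, not code from the lecture): placing the MinMaxScaler inside a Pipeline means that, for every cross-validation split, the scaler is fit only on that split's training fold, so no information from the held-out fold leaks into the scaling.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
# The scaler is re-fit on the training portion of every CV split
pipe = Pipeline([('scale', MinMaxScaler()), ('svc', SVC(C=1))])
scores = cross_val_score(pipe, X_cancer, y_cancer, cv=5)
print(scores.mean())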

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.cross_validation import  KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
# Set the parameter range
param_range = np.logspace(-3, 2, 20)

# Set number of folds
folds=5
#Initialize
overallTrainAccuracy=[]
overallTestAccuracy=[]

# Loop over the paramater range
for c in  param_range:
    trainingAccuracy=0
    testAccuracy=0
    kf = KFold(len(X_cancer),n_folds=folds)
    # Partition into training and test folds
    for train_index, test_index in kf:
        # Partition the data according to the fold indices generated
        X_train, X_test = X_cancer[train_index], X_cancer[test_index]
        y_train, y_test = y_cancer[train_index], y_cancer[test_index]

        # Scale the X_train and X_test
        scaler = MinMaxScaler()
        X_train_scaled = scaler.fit_transform(X_train)
        X_test_scaled = scaler.transform(X_test)
        # Fit a SVC model for each C
        clf = SVC(C=c).fit(X_train_scaled, y_train)
        #Compute the training and test score
        acctrain=clf.score(X_train_scaled, y_train)
        accTest=clf.score(X_test_scaled, y_test)
        trainingAccuracy += np.sum(acctrain)
        testAccuracy += np.sum(accTest)
    # Compute the mean training and testing accuracy
    overallTrainAccuracy.append(trainingAccuracy/folds)
    overallTestAccuracy.append(testAccuracy/folds)

overallTrainAccuracy=pd.DataFrame(overallTrainAccuracy,index=param_range)
overallTestAccuracy=pd.DataFrame(overallTestAccuracy,index=param_range)

# Plot training and test R squared as a function of alpha
df=pd.concat([overallTrainAccuracy,overallTestAccuracy],axis=1)
df.columns=['trainingAccuracy','testAccuracy']

fig3=plt.title('Validation Curve with SVM')
fig3=plt.xlabel('C (SVC regularization parameter)')
fig3=plt.ylabel('Score')
fig3=plt.ylim(0.5, 1.1)
lw = 2

fig3=plt.semilogx(param_range, overallTrainAccuracy, label='Training score',
color='darkorange', lw=lw)

fig3=plt.semilogx(param_range, overallTestAccuracy, label='Cross-validation score',
color='navy', lw=lw)

fig3=plt.legend(loc='best')
fig3.figure.savefig('fig3.png', bbox_inches='tight')


Output image:

## 1.8 a Decision trees – R code

Decision trees in R can be plotted using the rpart package.

library(rpart)
library(rpart.plot)
rpart = NULL
# Create a decision tree
m <-rpart(Species~.,data=iris)
#Plot
rpart.plot(m,extra=2,main="Decision Tree - IRIS")

## 1.8 b Decision trees – Python code

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.model_selection import train_test_split
import graphviz

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = 3)
clf = DecisionTreeClassifier().fit(X_train, y_train)

print('Accuracy of Decision Tree classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))

dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
## Accuracy of Decision Tree classifier on training set: 1.00
## Accuracy of Decision Tree classifier on test set: 0.97

## 1.9a Feature importance – R code

I found the following code, which has a snippet for feature importance. Sklearn has a nice method for this. For some reason the results in R and Python are different. Any thoughts?

set.seed(3)
library(mlbench)
library(caret)
cancer$target <- as.factor(cancer$target)
# Split as data
data <- cancer[,1:31]
target <- cancer[,32]

# Train the model
model <- train(data, target, method="rf", preProcess="scale", trControl=trainControl(method = "cv"))
# Compute variable importance
importance <- varImp(model)
# summarize importance
print(importance)
# plot importance
plot(importance)

## 1.9b Feature importance – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 0)
# Use the DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth = 4, min_samples_leaf = 8,
random_state = 0).fit(X_train, y_train)

c_features=len(cancer.feature_names)
print('Breast cancer dataset: decision tree')
print('Accuracy of DT classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of DT classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))

# Plot the feature importances
fig4=plt.figure(figsize=(10,6),dpi=80)

fig4=plt.barh(range(c_features), clf.feature_importances_)
fig4=plt.xlabel("Feature importance")
fig4=plt.ylabel("Feature name")
fig4=plt.yticks(np.arange(c_features), cancer.feature_names)
fig4=plt.tight_layout()
plt.savefig('fig4.png', bbox_inches='tight')

## Breast cancer dataset: decision tree
## Accuracy of DT classifier on training set: 0.96
## Accuracy of DT classifier on test set: 0.94

Output image:

## 1.10a Precision-Recall, ROC curves & AUC- R code

I tried several R packages for plotting the Precision-Recall and ROC/AUC curves. PRROC seems to work well. The Precision-Recall curve shows the tradeoff between precision and recall: the higher the precision, the lower the recall and vice versa. ROC curves that hug the top left corner indicate high sensitivity and specificity, and hence an excellent classifier (AUC close to 1).

source("RFunctions-1.R")
library(dplyr)
library(caret)
library(e1071)
library(PRROC)
# Read the data (this data is from sklearn!)
digits <- d[2:66]
digits$X64 <- as.factor(digits$X64)

# Split as training and test sets
train_idx <- trainTestSplit(digits,trainPercent=75,seed=5)
train <- digits[train_idx, ]
test <- digits[-train_idx, ]

# Fit a SVM model with linear basis kernel with probabilities
svmfit=svm(X64~., data=train, kernel="linear",scale=FALSE,probability=TRUE)
ypred=predict(svmfit,test,probability=TRUE)
head(attr(ypred,"probabilities"))
##               0            1
## 6  7.395947e-01 2.604053e-01
## 8  9.999998e-01 1.842555e-07
## 12 1.655178e-05 9.999834e-01
## 13 9.649997e-01 3.500032e-02
## 15 9.994849e-01 5.150612e-04
## 16 9.999987e-01 1.280700e-06
# Store the probability of 0s and 1s
m0<-attr(ypred,"probabilities")[,1]
m1<-attr(ypred,"probabilities")[,2]

# Create a dataframe of scores
scores <- data.frame(m1,test$X64)
# scores.class0 holds the scores of the +ve class (in this case, digit 1) and scores.class1 those of the -ve class (digit 0)
#Compute Precision Recall
pr <- pr.curve(scores.class0=scores[scores$test.X64=="1",]$m1,
scores.class1=scores[scores$test.X64=="0",]$m1,
curve=T)
# Plot precision-recall curve
plot(pr)
#Plot the ROC curve
roc<-roc.curve(m0, m1,curve=TRUE)
plot(roc)

## 1.10b Precision-Recall, ROC curves & AUC – Python code

For Python, Logistic Regression is used to plot the Precision-Recall and ROC curves and compute the AUC.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_digits
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve, auc
#Load the digits
dataset = load_digits()
X, y = dataset.data, dataset.target
#Create 2 classes - i) Digit 1 (from digit 1) ii) Digit 0 (from all other digits)
# Make a copy of the target
z= y.copy()
# Replace all non 1's as 0
z[z != 1] = 0
X_train, X_test, y_train, y_test = train_test_split(X, z, random_state=0)
# Fit a LR model
lr = LogisticRegression().fit(X_train, y_train)
#Compute the decision scores
y_scores_lr = lr.fit(X_train, y_train).decision_function(X_test)
y_score_list = list(zip(y_test[0:20], y_scores_lr[0:20]))
#Show the decision_function scores for first 20 instances
y_score_list

precision, recall, thresholds = precision_recall_curve(y_test, y_scores_lr)
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
#Plot
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.axes().set_aspect('equal')
plt.savefig('fig5.png', bbox_inches='tight')

#Compute and plot the ROC
y_score_lr = lr.fit(X_train, y_train).decision_function(X_test)
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_score_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)
plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curve (1-of-10 digits classifier)', fontsize=16)
plt.legend(loc='lower right', fontsize=13)
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
plt.axes()
plt.savefig('fig6.png', bbox_inches='tight')

Output image:

## 1.10c Precision-Recall, ROC curves & AUC – Python code

In the code below classification probabilities are used to compute and plot the precision-recall and ROC curves and the AUC.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_digits
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import precision_recall_curve

dataset = load_digits()
X, y = dataset.data, dataset.target
# Make a copy of the target
z= y.copy()
# Replace all non 1's as 0
z[z != 1] = 0
X_train, X_test, y_train, y_test = train_test_split(X, z, random_state=0)

svm = LinearSVC()
# Need to use CalibratedClassifierCV to predict probabilities for LinearSVC
clf = CalibratedClassifierCV(svm)
clf.fit(X_train, y_train)
y_proba_lr = clf.predict_proba(X_test)

precision, recall, thresholds = precision_recall_curve(y_test, y_proba_lr[:,1])
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
#plt.figure(figsize=(15,15),dpi=80)
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.axes().set_aspect('equal')
plt.savefig('fig7.png', bbox_inches='tight')

# Practical Machine Learning with R and Python – Part 3

In this post, 'Practical Machine Learning with R and Python – Part 3', I discuss 'Feature Selection' methods. This post is a continuation of my 2 earlier posts.

While applying Machine Learning techniques, the data set will usually include a large number of predictors for a target variable. It is quite likely that not all the predictors or feature variables will have an impact on the output. Hence it becomes necessary to choose only those features which influence the output variable, thus simplifying to a reduced feature set on which to train the ML model. The techniques that are used are the following

• Best fit
• Forward fit
• Backward fit
• Ridge Regression or L2 regularization
• Lasso or L1 regularization

This post includes the equivalent ML code in R and Python. All these methods remove those features which do not sufficiently influence the output. As in my previous 2 posts on 'Practical Machine Learning with R and Python', this post is largely based on the topics in the following 2 MOOC courses

1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibesherani, Online Stanford
2. Applied Machine Learning in Python, Prof Kevyn-Collin Thomson, University Of Michigan, Coursera

You can download this R Markdown file and the associated data from Github – Machine Learning-RandPython-Part3.

The content of this post and much more is now available as a compact book on Amazon in both formats – as Paperback ($9.99) and a Kindle version ($6.99/Rs449/). See 'Practical Machine Learning with R and Python – Machine Learning in Stereo'

# 1.1 Best Fit

For a dataset with features f1,f2,f3…fn, the 'Best fit' approach chooses all possible combinations of features and creates a separate ML model for each of the different combinations. The best fit algorithm then uses some filtering criterion based on Adjusted R-squared, Cp, BIC or AIC to pick out the best model among all models. Since the Best Fit approach searches the entire solution space, it is computationally infeasible for large feature sets: the number of models that have to be searched increases exponentially as the number of predictors increases. For 'p' predictors a total of $2^{p}$ ML models have to be searched. This can be shown as follows.

There are $\binom{n}{1}$ ways to choose single feature ML models among 'n' features, $\binom{n}{2}$ ways to choose 2 feature models among 'n' features, and so on. Hence the total number of models in Best Fit is
$1+\binom{n}{1} + \binom{n}{2} +... + \binom{n}{n}$
From the Binomial theorem we have
$(1+x)^{n} = 1+\binom{n}{1}x + \binom{n}{2}x^{2} +... + \binom{n}{n}x^{n}$
Setting x=1 in the equation above, this becomes
$2^{n} = 1+\binom{n}{1} + \binom{n}{2} +... + \binom{n}{n}$
Hence there are $2^{n}$ models to search amongst in Best Fit. For 10 features this is $2^{10}$ or ~1000 models, and for 40 features this becomes $2^{40}$, which is almost 1 trillion. Usually there are datasets with 1000 or maybe even 100000 features, and Best fit becomes computationally infeasible.
Anyway, I have included the Best Fit approach as I use the Boston crime dataset, which is available in both the MASS package in R and sklearn in Python, and has 13 features. Even this small feature set takes a bit of time since the Best fit needs to search among ~$2^{13}= 8192$ models.

Initially I perform a simple Linear Regression fit to estimate the features that are statistically insignificant. By looking at the p-values of the features it can be seen that the ‘indus’ and ‘age’ features have high p-values and are not significant

# 1.1a Linear Regression – R code

source('RFunctions-1.R')
#Read the Boston crime data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
# Select specific columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
dim(df1)
## [1] 506  14
# Linear Regression fit
fit <- lm(cost~. ,data=df1)
summary(fit)
##
## Call:
## lm(formula = cost ~ ., data = df1)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.595  -2.730  -0.518   1.777  26.199
##
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)
## (Intercept)   3.646e+01  5.103e+00   7.144 3.28e-12 ***
## crimeRate    -1.080e-01  3.286e-02  -3.287 0.001087 **
## zone          4.642e-02  1.373e-02   3.382 0.000778 ***
## indus         2.056e-02  6.150e-02   0.334 0.738288
## charles       2.687e+00  8.616e-01   3.118 0.001925 **
## nox          -1.777e+01  3.820e+00  -4.651 4.25e-06 ***
## rooms         3.810e+00  4.179e-01   9.116  < 2e-16 ***
## age           6.922e-04  1.321e-02   0.052 0.958229
## distances    -1.476e+00  1.995e-01  -7.398 6.01e-13 ***
## highways      3.060e-01  6.635e-02   4.613 5.07e-06 ***
## tax          -1.233e-02  3.760e-03  -3.280 0.001112 **
## teacherRatio -9.527e-01  1.308e-01  -7.283 1.31e-12 ***
## color         9.312e-03  2.686e-03   3.467 0.000573 ***
## status       -5.248e-01  5.072e-02 -10.347  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.745 on 492 degrees of freedom
## Multiple R-squared:  0.7406, Adjusted R-squared:  0.7338
## F-statistic: 108.1 on 13 and 492 DF,  p-value: < 2.2e-16

Next we apply the different feature selection methods to automatically remove features that are not significant

# 1.1a Best Fit – R code

The Best Fit requires the ‘leaps’ R package

library(leaps)
source('RFunctions-1.R')
#Read the Boston crime data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
# Select specific columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
# Perform a best fit
bestFit=regsubsets(cost~.,df1,nvmax=13)
# Generate a summary of the fit
bfSummary=summary(bestFit)
# Plot the Residual Sum of Squares vs number of variables
plot(bfSummary$rss,xlab="Number of Variables",ylab="RSS",type="l",main="Best fit RSS vs No of features")
# Get the index of the minimum value
a=which.min(bfSummary$rss)
# Mark this in red
points(a,bfSummary$rss[a],col="red",cex=2,pch=20)

The plot below shows that the Best fit occurs with all 13 features included. Notice that there is no significant change in RSS from 11 features onward.

# Plot the CP statistic vs Number of variables
plot(bfSummary$cp,xlab="Number of Variables",ylab="Cp",type='l',main="Best fit Cp vs No of features")
# Find the lowest CP value
b=which.min(bfSummary$cp)
# Mark this in red
points(b,bfSummary$cp[b],col="red",cex=2,pch=20)

Based on the Cp metric the best fit occurs at 11 features as seen below. The values of the coefficients are also included below

# Display the set of features which provide the best fit
coef(bestFit,b)
##  (Intercept)    crimeRate         zone      charles          nox
## 36.341145004 -0.108413345  0.045844929  2.718716303 -17.376023429
##        rooms    distances     highways          tax teacherRatio
##  3.801578840 -1.492711460  0.299608454 -0.011777973 -0.946524570
##        color       status
##  0.009290845 -0.522553457
# Plot the BIC value
plot(bfSummary$bic,xlab="Number of Variables",ylab="BIC",type='l',main="Best fit BIC vs No of Features")
# Find and mark the min value
c=which.min(bfSummary$bic)
points(c,bfSummary$bic[c],col="red",cex=2,pch=20)

# R has some other good plots for best fit
plot(bestFit,scale="r2",main="Rsquared vs No Features")

R has the following set of really nice visualizations. The plot below shows the Rsquared for the sets of predictor variables. It can be seen that when Rsquared reaches about 0.74, the variables indus, charles and age have not been included.

plot(bestFit,scale="Cp",main="Cp vs NoFeatures")

The Cp plot below similarly shows that indus, charles and age are not included in the Best fit
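For reference, and following the definitions used in the Statistical Learning course cited above, Cp penalizes the training RSS by the number of predictors d in the model,

$C_{p} = \frac{1}{n}\left(RSS + 2d\hat{\sigma}^{2}\right)$

where $\hat{\sigma}^{2}$ is an estimate of the error variance; BIC is similar but replaces the factor 2 with $\log(n)$, so it penalizes larger models more heavily.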

plot(bestFit,scale="bic",main="BIC vs Features")

## 1.1b Best fit (Exhaustive Search) – Python code

The Python package used for performing a Best Fit is the ExhaustiveFeatureSelector (EFS) from mlxtend.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS

# Read the Boston crime data
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
# Set X and y
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']

# Perform an Exhaustive Search. The EFS and SFS packages use 'neg_mean_squared_error'; the 'mean_squared_error' scorer seems to have been deprecated. This is just the MSE with a negative sign.
lr = LinearRegression()
efs1 = EFS(lr,
min_features=1,
max_features=13,
scoring='neg_mean_squared_error',
print_progress=True,
cv=5)

# Create a efs fit
efs1 = efs1.fit(X.as_matrix(), y.as_matrix())

print('Best negative mean squared error: %.2f' % efs1.best_score_)
## Print the IDX of the best features
print('Best subset:', efs1.best_idx_)

Features: 8191/8191
## Best negative mean squared error: -28.92
## ('Best subset:', (0, 1, 4, 6, 7, 8, 9, 10, 11, 12))

The indices for the best subset are shown above.
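To map these indices back to column names, a small follow-up snippet (not in the original post, using the efs1 and X objects from the block above) could be:

# Translate the best-subset indices reported by EFS into feature names
bestFeatures = list(efs1.best_idx_)
print(X.columns[bestFeatures])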

# 1.2 Forward fit

Forward fit is a greedy algorithm that tries to optimize the set of selected features by optimizing the selection criterion (adj Rsquared, Cp, AIC or BIC) at every step. For a dataset with features f1,f2,f3…fn, the forward fit starts with the NULL set. It then picks the single-feature ML model among the n features which has the highest adj Rsquared, or the minimum Cp, BIC or some such criterion. After picking the 1 feature out of n which best satisfies the criterion, the next feature from the remaining n-1 features is added. Once the best 2-feature model is chosen, another feature from the remaining n-2 features is added, and so on. Forward fit is a sub-optimal algorithm: there is no guarantee that the final list of features chosen will be the best among the lot. The computation required is $n + (n-1) + (n-2) + ... + 1 = n(n+1)/2$, which is of the order of $n^{2}$. Though forward fit gives a sub-optimal solution, it is far more computationally efficient than best fit. A minimal sketch of this greedy search is shown below.
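To make the mechanics of the greedy search concrete, the sketch below (a hypothetical helper, not the mlxtend SFS API used later) assumes X is a pandas DataFrame of predictors and y the target:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forwardFit(X, y, maxFeatures=13):
    remaining = list(X.columns)
    selected = []
    path = []
    while remaining and len(selected) < maxFeatures:
        # Score every candidate feature when added to the current set (5-fold CV MSE)
        mse = {f: -np.mean(cross_val_score(LinearRegression(), X[selected + [f]], y,
                                           scoring='neg_mean_squared_error', cv=5))
               for f in remaining}
        best = min(mse, key=mse.get)   # greedily keep the feature with the lowest CV MSE
        selected.append(best)
        remaining.remove(best)
        path.append((len(selected), mse[best]))
    return selected, path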

## 1.2a Forward fit – R code

Forward fit in R determines that 11 features are required for the best fit. The features are shown below

library(leaps)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")

# Select columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")

#Split as training and test
train_idx <- trainTestSplit(df1,trainPercent=75,seed=5)
train <- df1[train_idx, ]
test <- df1[-train_idx, ]

# Find the best forward fit
fitFwd=regsubsets(cost~.,data=train,nvmax=13,method="forward")

# Compute the MSE
valErrors=rep(NA,13)
test.mat=model.matrix(cost~.,data=test)
for(i in 1:13){
coefi=coef(fitFwd,id=i)
pred=test.mat[,names(coefi)]%*%coefi
valErrors[i]=mean((test$cost-pred)^2)
}
# Plot the Residual Sum of Squares
plot(valErrors,xlab="Number of Variables",ylab="Validation Error",type="l",main="Forward fit RSS vs No of features")
# Gives the index of the minimum value
a<-which.min(valErrors)
print(a)
## [1] 11
# Highlight the smallest value
points(a,valErrors[a],col="blue",cex=2,pch=20)

Forward fit in R selects 11 predictors as the best ML model to predict the ‘cost’ output variable. The values for these predictors are included below

#Print the coefficients
coefi=coef(fitFwd,id=i)
coefi
##   (Intercept)     crimeRate          zone         indus       charles
##  2.397179e+01 -1.026463e-01  3.118923e-02  1.154235e-04  3.512922e+00
##           nox         rooms           age     distances      highways
## -1.511123e+01  4.945078e+00 -1.513220e-02 -1.307017e+00  2.712534e-01
##           tax  teacherRatio         color        status
## -1.330709e-02 -8.182683e-01  1.143835e-02 -3.750928e-01

## 1.2b Forward fit with Cross Validation – R code

The Python SFS package includes N Fold Cross Validation errors for forward and backward fit, so I decided to add this code in R. This is not available in the ‘leaps’ R package, however the implementation is quite simple. Another implementation is also available in Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Online Stanford.

library(dplyr)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
# Select columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")

set.seed(6)
# Set max number of features
nvmax<-13
cvError <- NULL
# Loop through each number of features
for(i in 1:nvmax){
# Set no of folds
noFolds=5
# Create the rows which fall into different folds from 1..noFolds
folds = sample(1:noFolds, nrow(df1), replace=TRUE)
cv<-0
# Loop through the folds
for(j in 1:noFolds){
# The training set is all rows for which the fold index is != j (k-1 folds -> training)
train <- df1[folds!=j,]
# The rows which have j as the index become the test set
test <- df1[folds==j,]
# Create a forward fitting model for this
fitFwd=regsubsets(cost~.,data=train,nvmax=13,method="forward")
# Select the number of features and get the feature coefficients
coefi=coef(fitFwd,id=i)
#Get the value of the test data
test.mat=model.matrix(cost~.,data=test)
# Multiply the test data with the fitted coefficients to get the predicted value
# pred = b0 + b1x1+b2x2... b13x13
pred=test.mat[,names(coefi)]%*%coefi
# Compute mean squared error
rss=mean((test$cost - pred)^2)
# Add the Residual sum of square
cv=cv+rss
}
# Compute the average of MSE for K folds for number of features 'i'
cvError[i]=cv/noFolds
}
a <- seq(1,13)
d <- as.data.frame(t(rbind(a,cvError)))
names(d) <- c("Features","CVError")
#Plot the CV Error vs No of Features
ggplot(d,aes(x=Features,y=CVError),color="blue") + geom_point() + geom_line(color="blue") +
xlab("No of features") + ylab("Cross Validation Error") +
ggtitle("Forward Selection - Cross Valdation Error vs No of Features")

Forward fit with 5 fold cross validation indicates that all 13 features are required

# This gives the index of the minimum value
a=which.min(cvError)
print(a)
## [1] 13
#Print the 13 coefficients of these features
coefi=coef(fitFwd,id=a)
coefi
##   (Intercept)     crimeRate          zone         indus       charles
##  36.650645380  -0.107980979   0.056237669   0.027016678   4.270631466
##           nox         rooms           age     distances      highways
## -19.000715500   3.714720418   0.019952654  -1.472533973   0.326758004
##           tax  teacherRatio         color        status
##  -0.011380750  -0.972862622   0.009549938  -0.582159093

## 1.2c Forward fit – Python code

The Forward Fit in Python uses the Sequential Feature Selector (SFS) package (https://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/)

Note: The Cross Validation error for SFS is negative because it uses the ‘neg_mean_squared_error’ scorer; the earlier ‘mean_squared_error’ scorer seems to have been deprecated. I have taken the negative of this neg_mean_squared_error, which gives back the mean_squared_error.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression

# Read the Boston crime data
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()
# Create a forward fit model
sfs = SFS(lr,
k_features=(1,13),
forward=True, # Forward fit
floating=False,
scoring='neg_mean_squared_error',
cv=5)

# Fit this on the data
sfs = sfs.fit(X.as_matrix(), y.as_matrix())
# Get all the details of the forward fits
a=sfs.get_metric_dict()
n=[]
o=[]

# Compute the mean cross validation scores
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores']))
m=np.arange(1,13)
# Get the index of the minimum CV score

# Plot the CV scores vs the number of features
fig1=plt.plot(m,n)
fig1=plt.title('Mean CV Scores vs No of features')
fig1.figure.savefig('fig1.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sfs.get_metric_dict(confidence_interval=0.90)).T)

idx = np.argmin(n)
print "No of features=",idx
#Get the features indices for the best forward fit and convert to list
b=list(a[idx]['feature_idx'])
print(b)

# Index the column names.
# Features from forward fit
print("Features selected in forward fit")
print(X.columns[b])
##    avg_score ci_bound                                          cv_scores  \
## 1   -42.6185  19.0465  [-23.5582499971, -41.8215743748, -73.993608929...
## 2   -36.0651  16.3184  [-18.002498199, -40.1507894517, -56.5286659068...
## 3   -34.1001    20.87  [-9.43012884381, -25.9584955394, -36.184188174...
## 4   -33.7681  20.1638  [-8.86076528781, -28.650217633, -35.7246353855...
## 5   -33.6392  20.5271  [-8.90807628524, -28.0684679108, -35.827463022...
## 6   -33.6276  19.0859  [-9.549485942, -30.9724602876, -32.6689523347,...
## 7   -32.4082  19.1455  [-10.0177149635, -28.3780298492, -30.926917231...
## 8   -32.3697   18.533  [-11.1431684243, -27.5765510172, -31.168994094...
## 9   -32.4016  21.5561  [-10.8972555995, -25.739780653, -30.1837430353...
## 10  -32.8504  22.6508  [-12.3909282079, -22.1533250755, -33.385407342...
## 11  -34.1065  24.7019  [-12.6429253721, -22.1676650245, -33.956999528...
## 12  -35.5814   25.693  [-12.7303397453, -25.0145323483, -34.211898373...
## 13  -37.1318  23.2657  [-12.4603005692, -26.0486211062, -33.074137979...
##
##                                    feature_idx  std_dev  std_err
## 1                                        (12,)  18.9042  9.45212
## 2                                     (10, 12)  16.1965  8.09826
## 3                                  (10, 12, 5)  20.7142  10.3571
## 4                               (10, 3, 12, 5)  20.0132  10.0066
## 5                            (0, 10, 3, 12, 5)  20.3738  10.1869
## 6                         (0, 3, 5, 7, 10, 12)  18.9433  9.47167
## 7                      (0, 2, 3, 5, 7, 10, 12)  19.0026  9.50128
## 8                   (0, 1, 2, 3, 5, 7, 10, 12)  18.3946  9.19731
## 9               (0, 1, 2, 3, 5, 7, 10, 11, 12)  21.3952  10.6976
## 10           (0, 1, 2, 3, 4, 5, 7, 10, 11, 12)  22.4816  11.2408
## 11        (0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12)  24.5175  12.2587
## 12     (0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12)  25.5012  12.7506
## 13  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)  23.0919   11.546
## No of features= 7
## [0, 2, 3, 5, 7, 10, 12]
## #################################################################################
## Features selected in forward fit
## Index([u'crimeRate', u'indus', u'chasRiver', u'rooms', u'distances',
##        u'teacherRatio', u'status'],
##       dtype='object')

## 1.3 Backward Fit

Backward fit belongs to the same class of greedy algorithms. It tries to optimize the feature set by dropping one feature at every stage, choosing the drop according to a given criterion of Adj Rsquared, Cp, BIC or AIC. For a dataset with features f1,f2,f3…fn, the backward fit starts with all the features f1,f2,..,fn. It then picks the n-1 feature ML model by dropping the feature $f_{j}$ whose inclusion contributes the least to the fit (e.g. gives the worst adj Rsquared, or the largest Cp or BIC). At every step 1 feature is dropped. There is no guarantee that the final list of features chosen will be the best among the lot. The computation required is $n + (n-1) + (n-2) + ... + 1 = n(n+1)/2$, which is of the order of $n^{2}$. Though backward fit gives a sub-optimal solution, it is far more computationally efficient than best fit. It mirrors the forward fit sketch shown earlier, except that the least useful feature is dropped at every step.

## 1.3a Backward fit – R code

library(dplyr)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
# Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")

# Select columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")

set.seed(6)
# Set max number of features
nvmax<-13
cvError <- NULL
# Loop through each features
for(i in 1:nvmax){
# Set no of folds
noFolds=5
# Create the rows which fall into different folds from 1..noFolds
folds = sample(1:noFolds, nrow(df1), replace=TRUE)
cv<-0
for(j in 1:noFolds){
# The training is all rows for which the row is != j
train <- df1[folds!=j,]
# The rows which have j as the index become the test set
test <- df1[folds==j,]
# Create a backward fitting model for this
fitFwd=regsubsets(cost~.,data=train,nvmax=13,method="backward")
# Select the number of features and get the feature coefficients
coefi=coef(fitFwd,id=i)
#Get the value of the test data
test.mat=model.matrix(cost~.,data=test)
# Multiply the tes data with teh fitted coefficients to get the predicted value
# pred = b0 + b1x1+b2x2... b13x13
pred=test.mat[,names(coefi)]%*%coefi
# Compute mean squared error
rss=mean((test$cost - pred)^2)
# Add the Residual sum of square
cv=cv+rss
}
# Compute the average of MSE for K folds for number of features 'i'
cvError[i]=cv/noFolds
}
a <- seq(1,13)
d <- as.data.frame(t(rbind(a,cvError)))
names(d) <- c("Features","CVError")
# Plot the Cross Validation Error vs Number of features
ggplot(d,aes(x=Features,y=CVError),color="blue") + geom_point() + geom_line(color="blue") +
xlab("No of features") + ylab("Cross Validation Error") +
ggtitle("Backward Selection - Cross Validation Error vs No of Features")

# This gives the index of the minimum value
a=which.min(cvError)
print(a)
## [1] 13
#Print the 13 coefficients of these features
coefi=coef(fitFwd,id=a)
coefi
##   (Intercept)     crimeRate          zone         indus       charles
##  36.650645380  -0.107980979   0.056237669   0.027016678   4.270631466
##           nox         rooms           age     distances      highways
## -19.000715500   3.714720418   0.019952654  -1.472533973   0.326758004
##           tax  teacherRatio         color        status
##  -0.011380750  -0.972862622   0.009549938  -0.582159093

Backward selection in R also indicates that all 13 features, with the corresponding coefficients above, provide the best fit

## 1.3b Backward fit – Python code

The Backward Fit in Python uses the Sequential Feature Selector (SFS) package (https://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/)

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression

# Read the data
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()
# Create the SFS model
sfs = SFS(lr,
k_features=(1,13),
forward=False, # Backward
floating=False,
scoring='neg_mean_squared_error',
cv=5)

# Fit the model
sfs = sfs.fit(X.as_matrix(), y.as_matrix())
a=sfs.get_metric_dict()
n=[]
o=[]
# Compute the mean of the validation scores
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores']))
m=np.arange(1,13)

# Plot the Validation scores vs number of features
fig2=plt.plot(m,n)
fig2=plt.title('Mean CV Scores vs No of features')
fig2.figure.savefig('fig2.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sfs.get_metric_dict(confidence_interval=0.90)).T)
# Get the index of minimum cross validation error
idx = np.argmin(n)
print "No of features=",idx
#Get the feature indices for the best backward fit and convert to list
b=list(a[idx]['feature_idx'])
# Index the column names.
# Features from backward fit
print("Features selected in backward fit")
print(X.columns[b])

##    avg_score ci_bound                                          cv_scores  \
## 1   -42.6185  19.0465  [-23.5582499971, -41.8215743748, -73.993608929...
## 2   -36.0651  16.3184  [-18.002498199, -40.1507894517, -56.5286659068...
## 3   -35.4992  13.9619  [-17.2329292677, -44.4178648308, -51.633177846...
## 4    -33.463  12.4081  [-20.6415333292, -37.3247852146, -47.479302977...
## 5   -33.1038  10.6156  [-20.2872309863, -34.6367078466, -45.931870352...
## 6   -32.0638  10.0933  [-19.4463829372, -33.460638577, -42.726257249,...
## 7   -30.7133  9.23881  [-19.4425181917, -31.1742902259, -40.531266671...
## 8   -29.7432  9.84468  [-19.445277268, -30.0641187173, -40.2561247122...
## 9   -29.0878  9.45027  [-19.3545569877, -30.094768669, -39.7506036377...
## 10  -28.9225  9.39697  [-18.562171585, -29.968504938, -39.9586835965,...
## 11  -29.4301  10.8831  [-18.3346152225, -30.3312847532, -45.065432793...
## 12  -30.4589  11.1486  [-18.493389527, -35.0290639374, -45.1558231765...
## 13  -37.1318  23.2657  [-12.4603005692, -26.0486211062, -33.074137979...
##
##                                    feature_idx  std_dev  std_err
## 1                                        (12,)  18.9042  9.45212
## 2                                     (10, 12)  16.1965  8.09826
## 3                                  (10, 12, 7)  13.8576  6.92881
## 4                               (12, 10, 4, 7)  12.3154  6.15772
## 5                            (4, 7, 8, 10, 12)  10.5363  5.26816
## 6                         (4, 7, 8, 9, 10, 12)  10.0179  5.00896
## 7                      (1, 4, 7, 8, 9, 10, 12)  9.16981  4.58491
## 8                  (1, 4, 7, 8, 9, 10, 11, 12)  9.77116  4.88558
## 9               (0, 1, 4, 7, 8, 9, 10, 11, 12)  9.37969  4.68985
## 10           (0, 1, 4, 6, 7, 8, 9, 10, 11, 12)   9.3268   4.6634
## 11        (0, 1, 3, 4, 6, 7, 8, 9, 10, 11, 12)  10.8018  5.40092
## 12     (0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12)  11.0653  5.53265
## 13  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)  23.0919   11.546
## No of features= 9
## Features selected in backward fit
## Index([u'crimeRate', u'zone', u'NO2', u'distances', u'idxHighways', u'taxRate',
##        u'teacherRatio', u'color', u'status'],
##       dtype='object')

## 1.3c Sequential Floating Forward Selection (SFFS) – Python code

The Sequential Feature search also includes ‘floating’ variants which conditionally include or exclude features after they have been excluded or included. The SFFS can conditionally include features which were excluded in a previous step, if that results in a better fit. This option will tend to give a better solution than plain SFS. These variants are included below

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression

df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()
# Create the floating forward search
sffs = SFS(lr,
k_features=(1,13),
forward=True, # Forward
floating=True, #Floating
scoring='neg_mean_squared_error',
cv=5)

# Fit a model
sffs = sffs.fit(X.as_matrix(), y.as_matrix())
a=sffs.get_metric_dict()
n=[]
o=[]
# Compute mean validation scores
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores']))
m=np.arange(1,13)

# Plot the cross validation score vs number of features
fig3=plt.plot(m,n)
fig3=plt.title('SFFS:Mean CV Scores vs No of features')
fig3.figure.savefig('fig3.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sffs.get_metric_dict(confidence_interval=0.90)).T)
# Get the index of the minimum CV score
idx = np.argmin(n)
print "No of features=",idx
#Get the feature indices for the best forward floating fit and convert to list
b=list(a[idx]['feature_idx'])
print(b)
print("#################################################################################") # Index the column names. # Features from forward fit print("Features selected in forward fit") print(X.columns[b]) ## avg_score ci_bound cv_scores \ ## 1 -42.6185 19.0465 [-23.5582499971, -41.8215743748, -73.993608929... ## 2 -36.0651 16.3184 [-18.002498199, -40.1507894517, -56.5286659068... ## 3 -34.1001 20.87 [-9.43012884381, -25.9584955394, -36.184188174... ## 4 -33.7681 20.1638 [-8.86076528781, -28.650217633, -35.7246353855... ## 5 -33.6392 20.5271 [-8.90807628524, -28.0684679108, -35.827463022... ## 6 -33.6276 19.0859 [-9.549485942, -30.9724602876, -32.6689523347,... ## 7 -32.1834 12.1001 [-17.9491036167, -39.6479234651, -45.470227740... ## 8 -32.0908 11.8179 [-17.4389015788, -41.2453629843, -44.247557798... ## 9 -31.0671 10.1581 [-17.2689542913, -37.4379370429, -41.366372300... ## 10 -28.9225 9.39697 [-18.562171585, -29.968504938, -39.9586835965,... ## 11 -29.4301 10.8831 [-18.3346152225, -30.3312847532, -45.065432793... ## 12 -30.4589 11.1486 [-18.493389527, -35.0290639374, -45.1558231765... ## 13 -37.1318 23.2657 [-12.4603005692, -26.0486211062, -33.074137979... ## ## feature_idx std_dev std_err ## 1 (12,) 18.9042 9.45212 ## 2 (10, 12) 16.1965 8.09826 ## 3 (10, 12, 5) 20.7142 10.3571 ## 4 (10, 3, 12, 5) 20.0132 10.0066 ## 5 (0, 10, 3, 12, 5) 20.3738 10.1869 ## 6 (0, 3, 5, 7, 10, 12) 18.9433 9.47167 ## 7 (0, 1, 2, 3, 7, 10, 12) 12.0097 6.00487 ## 8 (0, 1, 2, 3, 7, 8, 10, 12) 11.7297 5.86484 ## 9 (0, 1, 2, 3, 7, 8, 9, 10, 12) 10.0822 5.04111 ## 10 (0, 1, 4, 6, 7, 8, 9, 10, 11, 12) 9.3268 4.6634 ## 11 (0, 1, 3, 4, 6, 7, 8, 9, 10, 11, 12) 10.8018 5.40092 ## 12 (0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12) 11.0653 5.53265 ## 13 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12) 23.0919 11.546 ## No of features= 9 ## [0, 1, 2, 3, 7, 8, 9, 10, 12] ## ################################################################################# ## Features selected in forward fit ## Index([u'crimeRate', u'zone', u'indus', u'chasRiver', u'distances', ## u'idxHighways', u'taxRate', u'teacherRatio', u'status'], ## dtype='object') ## 1.3d Sequential Floating Backward Selection (SFBS) – Python code The SFBS is an extension of the SBS. Here features that are excluded at any stage can be conditionally included if the resulting feature set gives a better fit. 
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression

df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
lr = LinearRegression()
sffs = SFS(lr,
k_features=(1,13),
forward=False, # Backward
floating=True, # Floating
scoring='neg_mean_squared_error',
cv=5)

sffs = sffs.fit(X.as_matrix(), y.as_matrix())
a=sffs.get_metric_dict()
n=[]
o=[]
# Compute the mean cross validation score
for i in np.arange(1,13):
    n.append(-np.mean(a[i]['cv_scores']))
m=np.arange(1,13)

fig4=plt.plot(m,n)
fig4=plt.title('SFBS: Mean CV Scores vs No of features')
fig4.figure.savefig('fig4.png', bbox_inches='tight')

print(pd.DataFrame.from_dict(sffs.get_metric_dict(confidence_interval=0.90)).T)
# Get the index of the minimum CV score
idx = np.argmin(n)
print "No of features=",idx
#Get the feature indices for the best backward floating fit and convert to list
b=list(a[idx]['feature_idx'])
print(b)
print("#################################################################################")
# Index the column names.
# Features from backward floating fit
print("Features selected in backward floating fit")
print(X.columns[b])

##    avg_score ci_bound                                          cv_scores  \
## 1   -42.6185  19.0465  [-23.5582499971, -41.8215743748, -73.993608929...
## 2   -36.0651  16.3184  [-18.002498199, -40.1507894517, -56.5286659068...
## 3   -34.1001    20.87  [-9.43012884381, -25.9584955394, -36.184188174...
## 4    -33.463  12.4081  [-20.6415333292, -37.3247852146, -47.479302977...
## 5   -32.3699  11.2725  [-20.8771078371, -34.9825657934, -45.813447203...
## 6   -31.6742  11.2458  [-20.3082500364, -33.2288990522, -45.535507868...
## 7   -30.7133  9.23881  [-19.4425181917, -31.1742902259, -40.531266671...
## 8   -29.7432  9.84468  [-19.445277268, -30.0641187173, -40.2561247122...
## 9   -29.0878  9.45027  [-19.3545569877, -30.094768669, -39.7506036377...
## 10  -28.9225  9.39697  [-18.562171585, -29.968504938, -39.9586835965,...
## 11  -29.4301  10.8831  [-18.3346152225, -30.3312847532, -45.065432793...
## 12  -30.4589  11.1486  [-18.493389527, -35.0290639374, -45.1558231765...
## 13  -37.1318  23.2657  [-12.4603005692, -26.0486211062, -33.074137979...
##
##                                    feature_idx  std_dev  std_err
## 1                                        (12,)  18.9042  9.45212
## 2                                     (10, 12)  16.1965  8.09826
## 3                                  (10, 12, 5)  20.7142  10.3571
## 4                               (4, 10, 7, 12)  12.3154  6.15772
## 5                            (12, 10, 4, 1, 7)  11.1883  5.59417
## 6                        (4, 7, 8, 10, 11, 12)  11.1618  5.58088
## 7                      (1, 4, 7, 8, 9, 10, 12)  9.16981  4.58491
## 8                  (1, 4, 7, 8, 9, 10, 11, 12)  9.77116  4.88558
## 9               (0, 1, 4, 7, 8, 9, 10, 11, 12)  9.37969  4.68985
## 10           (0, 1, 4, 6, 7, 8, 9, 10, 11, 12)   9.3268   4.6634
## 11        (0, 1, 3, 4, 6, 7, 8, 9, 10, 11, 12)  10.8018  5.40092
## 12     (0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12)  11.0653  5.53265
## 13  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)  23.0919   11.546
## No of features= 9
## [0, 1, 4, 7, 8, 9, 10, 11, 12]
## #################################################################################
## Features selected in backward floating fit
## Index([u'crimeRate', u'zone', u'NO2', u'distances', u'idxHighways', u'taxRate',
##        u'teacherRatio', u'color', u'status'],
##       dtype='object')

# 1.4 Ridge regression

In Linear Regression the Residual Sum of Squares (RSS) is given as

$RSS = \sum_{i=1}^{n} (y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij})^{2}$

Ridge regularization minimizes

$\sum_{i=1}^{n} (y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij})^{2} + \lambda \sum_{j=1}^{p}\beta_{j}^{2}$

where $\lambda$ is the regularization or tuning parameter. Increasing $\lambda$ increases the penalty on the coefficients, thus shrinking them. However, in Ridge Regression, features that do not influence the target variable will shrink closer to zero but never become zero, except for very large values of $\lambda$.

Ridge regression in R requires the ‘glmnet’ package

## 1.4a Ridge Regression – R code

library(glmnet)
library(dplyr)
# Read the data
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
#Rename the columns
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
# Select specific columns
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
# Set X and y as matrices
X=as.matrix(df1[,1:13])
y=df1$cost

# Fit a Ridge model
fitRidge <-glmnet(X,y,alpha=0)

#Plot the model where the coefficient shrinkage is plotted vs log lambda
plot(fitRidge,xvar="lambda",label=TRUE,main= "Ridge regression coefficient shrinkage vs log lambda")

The plot below shows how the 13 coefficients for the 13 predictors vary when lambda is increased. The x-axis includes log (lambda). We can see that increasing lambda from $10^{2}$ to $10^{6}$ significantly shrinks the coefficients. We can draw a vertical line from the x-axis and read the values of the 13 coefficients. Some of them will be close to zero

# Compute the cross validation error
cvRidge=cv.glmnet(X,y,alpha=0)

#Plot the cross validation error
plot(cvRidge, main="Ridge regression Cross Validation Error (10 fold)")

This gives the 10-fold Cross Validation Error with respect to log(lambda). As lambda increases, the MSE increases.
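For completeness (this derivation is not part of the original post), the reason ridge shrinks coefficients smoothly but almost never sets them exactly to zero can be seen from its closed-form solution for centered, standardized predictors:

$\hat{\beta}^{ridge} = (X^{T}X + \lambda I)^{-1}X^{T}y$

As $\lambda$ grows, the $\lambda I$ term dominates and all coefficients are pulled towards zero, but the solution remains dense.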

## 1.4b Ridge Regression – Python code

The coefficient shrinkage plot, similar to the one in R, can be generated in Python by computing the ridge coefficients over a range of alpha values. This is included below

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Read the Boston crime data
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()

from sklearn.linear_model import Ridge
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state = 0)

# Scale the X_train and X_test
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Fit a ridge regression with alpha=20
linridge = Ridge(alpha=20.0).fit(X_train_scaled, y_train)

# Print the training R squared
print('R-squared score (training): {:.3f}'
.format(linridge.score(X_train_scaled, y_train)))
# Print the test Rsquared
print('R-squared score (test): {:.3f}'
.format(linridge.score(X_test_scaled, y_test)))
print('Number of non-zero features: {}'
.format(np.sum(linridge.coef_ != 0)))

trainingRsquared=[]
testRsquared=[]
# Plot the effect of alpha on the test Rsquared
print('Ridge regression: effect of alpha regularization parameter\n')
# Choose a list of alpha values
for this_alpha in [0.001,.01,.1,0, 1, 10, 20, 50, 100, 1000]:
    linridge = Ridge(alpha = this_alpha).fit(X_train_scaled, y_train)
    # Compute training rsquared
    r2_train = linridge.score(X_train_scaled, y_train)
    # Compute test rsquared
    r2_test = linridge.score(X_test_scaled, y_test)
    num_coeff_bigger = np.sum(abs(linridge.coef_) > 1.0)
    trainingRsquared.append(r2_train)
    testRsquared.append(r2_test)

# Create a dataframe
alpha=[0.001,.01,.1,0, 1, 10, 20, 50, 100, 1000]
trainingRsquared=pd.DataFrame(trainingRsquared,index=alpha)
testRsquared=pd.DataFrame(testRsquared,index=alpha)

# Plot training and test R squared as a function of alpha
df3=pd.concat([trainingRsquared,testRsquared],axis=1)
df3.columns=['trainingRsquared','testRsquared']

fig5=df3.plot()
fig5=plt.title('Ridge training and test squared error vs Alpha')
fig5.figure.savefig('fig5.png', bbox_inches='tight')

# Plot the coefficient shrinkage over a range of alpha values

from sklearn import linear_model
# #############################################################################
# Compute paths

n_alphas = 200
alphas = np.logspace(0, 8, n_alphas)

coefs = []
for a in alphas:
    ridge = linear_model.Ridge(alpha=a, fit_intercept=False)
    ridge.fit(X_train_scaled, y_train)
    coefs.append(ridge.coef_)

# #############################################################################
# Display results

ax = plt.gca()

fig6=ax.plot(alphas, coefs)
fig6=ax.set_xscale('log')
fig6=ax.set_xlim(ax.get_xlim()[::-1])  # reverse axis
fig6=plt.xlabel('alpha')
fig6=plt.ylabel('weights')
fig6=plt.title('Ridge coefficients as a function of the regularization')
fig6=plt.axis('tight')
plt.savefig('fig6.png', bbox_inches='tight')

## R-squared score (training): 0.620
## R-squared score (test): 0.438
## Number of non-zero features: 13
## Ridge regression: effect of alpha regularization parameter

The plot below shows the training and test error when increasing the tuning or regularization parameter ‘alpha’

For Python the coefficient shrinkage plot must be viewed from right to left, in the direction of increasing alpha. As alpha increases, the coefficients shrink towards 0.

# 1.5 Lasso regularization

The Lasso is another form of regularization, also known as L1 regularization. Unlike Ridge Regression, where the coefficients of features which do not influence the target only tend towards zero, in Lasso regularization these coefficients actually become 0. The general form of Lasso is as follows

$\sum_{i=1}^{n} (y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij})^{2} + \lambda \sum_{j=1}^{p}|\beta_{j}|$
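One extra step of intuition (not in the original post): for standardized, orthonormal predictors the lasso estimate has the closed soft-thresholding form

$\hat{\beta}_{j}^{lasso} = \mathrm{sign}(\hat{\beta}_{j})\left(|\hat{\beta}_{j}| - \lambda/2\right)_{+}$

so any least-squares coefficient whose magnitude falls below the threshold is set exactly to zero, which is why the lasso performs feature selection while ridge does not.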

## 1.5a Lasso regularization – R code

library(glmnet)
library(dplyr)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL
names(df) <-c("no","crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")
df1 <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
"distances","highways","tax","teacherRatio","color","status","cost")

# Set X and y as matrices
X=as.matrix(df1[,1:13])
y=df1$cost

# Fit the lasso model
fitLasso <- glmnet(X,y)
# Plot the coefficient shrinkage as a function of log(lambda)
plot(fitLasso,xvar="lambda",label=TRUE,main="Lasso regularization - Coefficient shrinkage vs log lambda")

The plot below shows that in L1 regularization the coefficients actually become zero with increasing lambda

# Compute the cross validation error (10 fold)
cvLasso=cv.glmnet(X,y,alpha=1)
# Plot the cross validation error
plot(cvLasso)

This gives the MSE for the lasso model

## 1.5b Lasso regularization – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler
from sklearn import linear_model
scaler = MinMaxScaler()
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
linlasso = Lasso(alpha=0.1, max_iter = 10).fit(X_train_scaled, y_train)
print('Non-zero features: {}'
.format(np.sum(linlasso.coef_ != 0)))
print('R-squared score (training): {:.3f}'
.format(linlasso.score(X_train_scaled, y_train)))
print('R-squared score (test): {:.3f}\n'
.format(linlasso.score(X_test_scaled, y_test)))
print('Features with non-zero weight (sorted by absolute magnitude):')
for e in sorted (list(zip(list(X), linlasso.coef_)), key = lambda e: -abs(e[1])):
    if e[1] != 0:
        print('\t{}, {:.3f}'.format(e[0], e[1]))

print('Lasso regression: effect of alpha regularization\n\
parameter on number of features kept in final model\n')

trainingRsquared=[]
testRsquared=[]
#for alpha in [0.01,0.05,0.1, 1, 2, 3, 5, 10, 20, 50]:
for alpha in [0.01,0.07,0.05, 0.1, 1,2, 3, 5, 10]:
    linlasso = Lasso(alpha, max_iter = 10000).fit(X_train_scaled, y_train)
    r2_train = linlasso.score(X_train_scaled, y_train)
    r2_test = linlasso.score(X_test_scaled, y_test)
    trainingRsquared.append(r2_train)
    testRsquared.append(r2_test)

alpha=[0.01,0.07,0.05, 0.1, 1,2, 3, 5, 10]
#alpha=[0.01,0.05,0.1, 1, 2, 3, 5, 10, 20, 50]
trainingRsquared=pd.DataFrame(trainingRsquared,index=alpha)
testRsquared=pd.DataFrame(testRsquared,index=alpha)
df3=pd.concat([trainingRsquared,testRsquared],axis=1)
df3.columns=['trainingRsquared','testRsquared']
fig7=df3.plot()
fig7=plt.title('LASSO training and test squared error vs Alpha')
fig7.figure.savefig('fig7.png', bbox_inches='tight')

## Non-zero features: 7
## R-squared score (training): 0.726
## R-squared score (test): 0.561
##
## Features with non-zero weight (sorted by absolute magnitude):
##      status, -18.361
##      rooms, 18.232
##      teacherRatio, -8.628
##      taxRate, -2.045
##      color, 1.888
##      chasRiver, 1.670
##      distances, -0.529
## Lasso regression: effect of alpha regularization
## parameter on number of features kept in final model
##
## Computing regularization path using the LARS ...
## .C:\Users\Ganesh\ANACON~1\lib\site-packages\sklearn\linear_model\coordinate_descent.py:484: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
##   ConvergenceWarning)

## 1.5c Lasso coefficient shrinkage – Python code

To plot the coefficient shrinkage for Lasso, the Least Angle Regression (LARS) path from sklearn is used. This is shown below

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler
from sklearn import linear_model
scaler = MinMaxScaler()
df = pd.read_csv("Boston.csv",encoding = "ISO-8859-1")
#Rename the columns
df.columns=["no","crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status","cost"]
X=df[["crimeRate","zone","indus","chasRiver","NO2","rooms","age",
"distances","idxHighways","taxRate","teacherRatio","color","status"]]
y=df['cost']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
print("Computing regularization path using the LARS ...")
alphas, _, coefs = linear_model.lars_path(X_train_scaled, y_train, method='lasso', verbose=True)
xx = np.sum(np.abs(coefs.T), axis=1)
xx /= xx[-1]
fig8=plt.plot(xx, coefs.T)
ymin, ymax = plt.ylim()
fig8=plt.vlines(xx, ymin, ymax, linestyle='dashed')
fig8=plt.xlabel('|coef| / max|coef|')
fig8=plt.ylabel('Coefficients')
fig8=plt.title('LASSO Path - Coefficient Shrinkage vs L1')
fig8=plt.axis('tight')
plt.savefig('fig8.png', bbox_inches='tight')

This 3rd part of the series covers the main ‘feature selection’ methods. I hope these posts serve as a quick and useful reference to ML code both for R and Python!

Stay tuned for further updates to this series! Watch this space!

To see all posts see Index of posts

# Practical Machine Learning with R and Python – Part 2

In this 2nd part of the series “Practical Machine Learning with R and Python – Part 2”, I continue where I left off in my first post, Practical Machine Learning with R and Python – Part 1. In this post I cover some classification algorithms and cross validation. Specifically I touch

• Logistic Regression
• K Nearest Neighbors (KNN) classification
• Leave out one Cross Validation (LOOCV)
• K Fold Cross Validation

in both R and Python.

As in my initial post the algorithms are based on the following courses. You can download this R Markdown file along with the data from Github. I hope these posts can be used as a quick reference in R and Python and Machine Learning. I have tried to include the coolest part of either course in this post.

The content of this post and much more is now available as a compact book on Amazon in both formats – as Paperback ($9.99) and a Kindle version($6.99/Rs449/). See ‘Practical Machine Learning with R and Python – Machine Learning in stereo’

The following classification problem is based on Logistic Regression. The data is an included data set in Scikit-Learn, which I have saved as csv and also use for R.

The fit of a classification Machine Learning model depends on how correctly it classifies the data. There are several measures of testing a model’s classification performance.
They are

Accuracy = (TP + TN) / (TP + TN + FP + FN) – Fraction of all classes correctly classified
Precision = TP / (TP + FP) – Fraction of correctly classified positives among those classified as positive
Recall = TP / (TP + FN) – Also known as sensitivity, or True Positive Rate – Fraction of correctly classified positives among all positives in the data
F1 = 2 * Precision * Recall / (Precision + Recall)

## 1a. Logistic Regression – R code

The caret and e1071 packages are required for using the confusionMatrix call

source("RFunctions.R")
library(dplyr)
library(caret)
library(e1071)
# Read the data (from sklearn)
cancer <- read.csv("cancer.csv")
# Rename the target variable
names(cancer) <- c(seq(1,30),"output")
# Split as training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Fit a generalized linear logistic model
fit=glm(output~.,family=binomial,data=train,control = list(maxit = 50))
# Predict the output from the model
a=predict(fit,newdata=train,type="response")
# Set response >0.5 as 1 and <=0.5 as 0
b=ifelse(a>0.5,1,0)
# Compute the confusion matrix for training data
confusionMatrix(b,train$output)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction   0   1
##          0 154   0
##          1   0 272
##
##                Accuracy : 1
##                  95% CI : (0.9914, 1)
##     No Information Rate : 0.6385
##     P-Value [Acc > NIR] : < 2.2e-16
##
##                   Kappa : 1
##  Mcnemar's Test P-Value : NA
##
##             Sensitivity : 1.0000
##             Specificity : 1.0000
##          Pos Pred Value : 1.0000
##          Neg Pred Value : 1.0000
##              Prevalence : 0.3615
##          Detection Rate : 0.3615
##    Detection Prevalence : 0.3615
##       Balanced Accuracy : 1.0000
##
##        'Positive' Class : 0
## 
m=predict(fit,newdata=test,type="response")
n=ifelse(m>0.5,1,0)
# Compute the confusion matrix for test output
confusionMatrix(n,test$output)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0 52  4
##          1  5 81
##
##                Accuracy : 0.9366
##                  95% CI : (0.8831, 0.9706)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : <2e-16
##
##                   Kappa : 0.8677
##  Mcnemar's Test P-Value : 1
##
##             Sensitivity : 0.9123
##             Specificity : 0.9529
##          Pos Pred Value : 0.9286
##          Neg Pred Value : 0.9419
##              Prevalence : 0.4014
##          Detection Rate : 0.3662
##    Detection Prevalence : 0.3944
##       Balanced Accuracy : 0.9326
##
##        'Positive' Class : 0
##

## 1b. Logistic Regression – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
os.chdir("C:\\Users\\Ganesh\\RandPython")
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_breast_cancer
# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
# Call the Logistic Regression function
clf = LogisticRegression().fit(X_train, y_train)
fig, subaxes = plt.subplots(1, 1, figsize=(7, 5))
# Fit a model
clf = LogisticRegression().fit(X_train, y_train)
# Compute and print the Accuracy scores
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
y_predicted=clf.predict(X_test)
# Compute and print confusion matrix
confusion = confusion_matrix(y_test, y_predicted)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(y_test, y_predicted)))
print('Precision: {:.2f}'.format(precision_score(y_test, y_predicted)))
print('Recall: {:.2f}'.format(recall_score(y_test, y_predicted)))
print('F1: {:.2f}'.format(f1_score(y_test, y_predicted)))

## Accuracy of Logistic regression classifier on training set: 0.96
## Accuracy of Logistic regression classifier on test set: 0.96
## Accuracy: 0.96
## Precision: 0.99
## Recall: 0.94
## F1: 0.97

## 2. Dummy variables

The following R and Python code show how dummy variables are handled in R and Python. Dummy variables encode categorical variables as indicator values before they are used in a Machine Learning model. For e.g. if we had currency as ‘dollar’, ‘rupee’ and ‘yen’ then the dummy variables will encode these as
dollar 0 0 0
rupee  0 0 1
yen    0 1 0

## 2a. Logistic Regression with dummy variables – R code

# Load the dummies library
library(dummies)
df <- read.csv("adult1.csv",stringsAsFactors = FALSE,na.strings = c(""," "," ?"))
# Remove rows which have NA
df1 <- df[complete.cases(df),]
dim(df1)
## [1] 30161    16
# Select specific columns
adult <- df1 %>% dplyr::select(age,occupation,education,educationNum,capitalGain,
capital.loss,hours.per.week,native.country,salary)
# Set the dummy data with appropriate values
adult1 <- dummy.data.frame(adult, sep = ".")
#Split as training and test
train_idx <- trainTestSplit(adult1,trainPercent=75,seed=1111)
train <- adult1[train_idx, ]
test <- adult1[-train_idx, ]
# Fit a binomial logistic regression
fit=glm(salary~.,family=binomial,data=train)
# Predict response
a=predict(fit,newdata=train,type="response")
# If response >0.5 then it is a 1 and 0 otherwise
b=ifelse(a>0.5,1,0)
confusionMatrix(b,train$salary)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction     0     1
##          0 16065  3145
##          1   968  2442
##
##                Accuracy : 0.8182
##                  95% CI : (0.8131, 0.8232)
##     No Information Rate : 0.753
##     P-Value [Acc > NIR] : < 2.2e-16
##
##                   Kappa : 0.4375
##  Mcnemar's Test P-Value : < 2.2e-16
##
##             Sensitivity : 0.9432
##             Specificity : 0.4371
##          Pos Pred Value : 0.8363
##          Neg Pred Value : 0.7161
##              Prevalence : 0.7530
##          Detection Rate : 0.7102
##    Detection Prevalence : 0.8492
##       Balanced Accuracy : 0.6901
##
##        'Positive' Class : 0
## 
# Compute and display confusion matrix
m=predict(fit,newdata=test,type="response")
## Warning in predict.lm(object, newdata, se.fit, scale = 1, type =
## ifelse(type == : prediction from a rank-deficient fit may be misleading
n=ifelse(m>0.5,1,0)
print(a$overall[1])
}
## Accuracy
## 0.7835831
## Accuracy
## 0.8162047
## Accuracy
## 0.8089113
## Accuracy
## 0.8209787
## Accuracy
## 0.8184591
#Plot the Accuracy for each of the KNN models
df <- data.frame(neighbors,Accuracy=cMat)
ggplot(df,aes(x=neighbors,y=Accuracy)) + geom_point() +geom_line(color="blue") +
xlab("Number of neighbors") + ylab("Accuracy") +
ggtitle("KNN regression - Accuracy vs Number of Neighbors (Unnormalized)")

## 3b – K Nearest Neighbors Classification – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
# Read data
df =pd.read_csv("adult1.csv",encoding="ISO-8859-1",na_values=[""," "," ?"])
df1=df.dropna()
print(df1.shape)
# Select specific columns
adult = df1[['age','occupation','education','educationNum','capitalGain','capital-loss',
'hours-per-week','native-country','salary']]
X=adult[['age','occupation','education','educationNum','capitalGain','capital-loss',
'hours-per-week','native-country']]
#Set values for dummy variables
X_adult=pd.get_dummies(X,columns=['occupation','education','native-country'])
y=adult['salary']
X_adult_train, X_adult_test, y_train, y_test = train_test_split(X_adult, y,
random_state = 0)
# KNN classification in Python requires the data to be scaled.
# Scale the data
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_adult_train)
# Apply scaling to test set also
X_test_scaled = scaler.transform(X_adult_test)
# Compute the KNN model for 1,3,5,10 & 15 neighbors
accuracy=[]
neighbors=[1,3,5,10,15]
for i in neighbors:
    knn = KNeighborsClassifier(n_neighbors = i)
    knn.fit(X_train_scaled, y_train)
    accuracy.append(knn.score(X_test_scaled, y_test))
    print('Accuracy test score: {:.3f}'
    .format(knn.score(X_test_scaled, y_test)))

# Plot the models with the Accuracy attained for each of these models
fig1=plt.plot(neighbors,accuracy)
fig1=plt.title("KNN regression - Accuracy vs Number of neighbors")
fig1=plt.xlabel("Neighbors")
fig1=plt.ylabel("Accuracy")
fig1.figure.savefig('foo1.png', bbox_inches='tight')

## (30161, 16)
## Accuracy test score: 0.749
## Accuracy test score: 0.779
## Accuracy test score: 0.793
## Accuracy test score: 0.804
## Accuracy test score: 0.803

Output image:

## 4 MPG vs Horsepower

The following scatter plot shows the non-linear relation between mpg and horsepower.
This will be used as the data input for computing the K Fold Cross Validation Error

## 4a MPG vs Horsepower scatter plot – R Code

df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
ggplot(df3,aes(x=horsepower,y=mpg)) + geom_point() + xlab("Horsepower") +
ylab("Miles Per gallon") + ggtitle("Miles per Gallon vs Horsepower")

## 4b MPG vs Horsepower scatter plot – Python Code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
autoDF3=autoDF2.dropna()
autoDF3.shape
#X=autoDF3[['cylinder','displacement','horsepower','weight']]
X=autoDF3[['horsepower']]
y=autoDF3['mpg']
fig11=plt.scatter(X,y)
fig11=plt.title("MPG vs Horsepower")
fig11=plt.xlabel("Horsepower")
fig11=plt.ylabel("MPG")
fig11.figure.savefig('foo11.png', bbox_inches='tight')

## 5 K Fold Cross Validation

K Fold Cross Validation is a technique in which the data set is divided into K Folds or K partitions. The Machine Learning model is trained on K-1 folds and tested on the Kth fold, i.e. we have K-1 folds for training data and 1 fold for testing the ML model. Since we can choose the test fold in $C_{1}^{K}$ or K choose 1 ways, there will be K such partitions. The K Fold Cross Validation estimates the average validation error that we can expect on new unseen test data.

The formula for K Fold Cross Validation is as follows

$MSE_{k} = \frac{\sum (y-\hat{y})^{2}}{n_{k}}$

and

$n_{k} = \frac{N}{K}$

and

$CV_{(K)} = \sum_{k=1}^{K} \frac{n_{k}}{N} MSE_{k}$

where $n_{k}$ is the number of elements in partition ‘k’ and N is the total number of elements. When all the folds have the same size this reduces to

$CV_{(K)} = \frac{1}{K}\sum_{k=1}^{K} MSE_{k}$

Leave Out One Cross Validation (LOOCV) is a special case of K Fold Cross Validation where N-1 data points are used to train the model and 1 data point is used to test the model. There are N such partitions of N-1 & 1 that are possible. The mean error is measured. The Cross Validation Error for LOOCV is

$CV_{(n)} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_{i}-\hat{y}_{i}}{1-h_{i}}\right)^{2}$

where $h_{i}$ is the i-th diagonal element of the hat matrix, see [Statistical Learning]

The above formula is also included in this blog post

It took me a day and a half to implement the K Fold Cross Validation formula. I think it is correct. In any case do let me know if you think it is off

## 5a. Leave out one cross validation (LOOCV) – R Code

R uses the package ‘boot’ for performing the Cross Validation error computation

library(boot)
library(reshape2)
# Read data
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
# Select complete cases
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
set.seed(17)
cv.error=rep(0,10)
# For polynomials 1,2,3... 10 fit a LOOCV model
for (i in 1:10){
glm.fit=glm(mpg~poly(horsepower,i),data=df3)
cv.error[i]=cv.glm(df3,glm.fit)$delta[1]

}
cv.error
##  [1] 24.23151 19.24821 19.33498 19.42443 19.03321 18.97864 18.83305
##  [8] 18.96115 19.06863 19.49093
# Create and display a plot
folds <- seq(1,10)
df <- data.frame(folds,cvError=cv.error)
ggplot(df,aes(x=folds,y=cvError)) + geom_point() +geom_line(color="blue") +
xlab("Degree of Polynomial") + ylab("Cross Validation Error") +
ggtitle("Leave one out Cross Validation - Cross Validation Error vs Degree of Polynomial")
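
Before moving on to the Python implementation, here is a minimal numpy sketch (with purely hypothetical toy numbers, not taken from the auto_mpg data) of the weighted $CV_{K}$ formula from section 5, just to make the per-fold bookkeeping concrete.

import numpy as np
# Hypothetical squared errors (y-yhat)^2 for K=3 folds of a 6-point data set
fold_errors = [np.array([1.0, 4.0]),   # fold 1
               np.array([0.0, 1.0]),   # fold 2
               np.array([9.0, 1.0])]   # fold 3
N = sum(len(f) for f in fold_errors)
# MSE_k = sum of squared errors in fold k divided by n_k
mse_k = [f.sum() / len(f) for f in fold_errors]
# CV_K = sum over folds of (n_k/N) * MSE_k; with equal fold sizes this is just the mean
cv_K = sum((len(f) / float(N)) * m for f, m in zip(fold_errors, mse_k))
print(mse_k, cv_K)   # [2.5, 0.5, 5.0] and approximately 2.67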

## 5b. Leave out one cross validation (LOOCV) – Python Code

In Python there is no ready-made function for computing the Cross Validation error as defined above, so the formula has to be implemented directly. I have done this after several hours. I think it is now in reasonable shape. Do let me know if you think otherwise. For LOOCV I use K Fold Cross Validation with K=N.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import train_test_split, KFold
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
# Remove rows with NAs
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['horsepower']]
y=autoDF3['mpg']

# For polynomial degree 1,2,3... 10
def computeCVError(X,y,folds):
    deg=[]
    mse=[]
    degree1=[1,2,3,4,5,6,7,8,9,10]

    nK=len(X)/float(folds)
    xval_err=0
    # For degree 'j'
    for j in degree1:
        # Split as 'folds'
        kf = KFold(len(X),n_folds=folds)
        for train_index, test_index in kf:
            # Create the appropriate train and test partitions from the fold index
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]

            # For the polynomial degree 'j'
            poly = PolynomialFeatures(degree=j)
            # Transform the X_train and X_test
            X_train_poly = poly.fit_transform(X_train)
            X_test_poly = poly.fit_transform(X_test)
            # Fit a model on the transformed data
            linreg = LinearRegression().fit(X_train_poly, y_train)
            # Compute yhat or ypred
            y_pred = linreg.predict(X_test_poly)
            # Compute MSE * n_K/N
            test_mse = mean_squared_error(y_test, y_pred)*float(len(X_train))/float(len(X))
            # Add the test_mse for this partition of the data
            mse.append(test_mse)
        # Compute the mean of all folds for degree 'j'
        deg.append(np.mean(mse))

    return(deg)

df=pd.DataFrame()
print(len(X))
# Call the function once. For LOOCV K=N. hence len(X) is passed as number of folds
cvError=computeCVError(X,y,len(X))

# Create and plot LOOCV
df=pd.DataFrame(cvError)
fig3=df.plot()
fig3=plt.title("Leave one out Cross Validation - Cross Validation Error vs Degree of Polynomial")
fig3=plt.xlabel("Degree of Polynomial")
fig3=plt.ylabel("Cross validation Error")
fig3.figure.savefig('foo3.png', bbox_inches='tight')
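
As a cross-check, the same LOOCV error can also be obtained with cross_val_score from sklearn.model_selection (the newer home of the sklearn.cross_validation module used above). This is only a sketch and assumes X (horsepower) and y (mpg) as defined in the snippet above; since LOOCV is deterministic, the values should be broadly comparable to the cv.glm output in 5a.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.preprocessing import PolynomialFeatures
loocvError=[]
for degree in range(1,11):
    # Expand horsepower into polynomial terms of degree 'degree'
    X_poly = PolynomialFeatures(degree=degree).fit_transform(X)
    # cross_val_score returns the negative MSE, so flip the sign
    scores = cross_val_score(LinearRegression(), X_poly, y,
                             scoring='neg_mean_squared_error', cv=LeaveOneOut())
    loocvError.append(-scores.mean())
print(loocvError)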

## 6a K Fold Cross Validation – R code

Here K Fold Cross Validation is done for 4, 5 and 10 folds using the boot package and the glm() function

library(boot)
library(reshape2)
set.seed(17)
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
a=matrix(rep(0,30),nrow=3,ncol=10)
set.seed(17)
# Set the folds as 4,5 and 10
folds<-c(4,5,10)
for(i in seq_along(folds)){
cv.error.10=rep(0,10)
for (j in 1:10){
# Fit a generalized linear model
glm.fit=glm(mpg~poly(horsepower,j),data=df3)
# Compute K Fold Validation error
a[i,j]=cv.glm(df3,glm.fit,K=folds[i])$delta[1]
}
}
# Create and display the K Fold Cross Validation Error
b <- t(a)
df <- data.frame(b)
df1 <- cbind(seq(1,10),df)
names(df1) <- c("PolynomialDegree","4-fold","5-fold","10-fold")
df2 <- melt(df1,id="PolynomialDegree")
ggplot(df2) + geom_line(aes(x=PolynomialDegree, y=value, colour=variable),size=2) +
xlab("Degree of Polynomial") + ylab("Cross Validation Error") +
ggtitle("K Fold Cross Validation - Cross Validation Error vs Degree of Polynomial")

## 6b. K Fold Cross Validation – Python code

The K Fold Cross Validation error has to be computed from the formula above, and I have done this below. There is a small discrepancy in the shapes of the curves compared with the R plot above; this is possibly because KFold here creates the folds sequentially without shuffling, whereas cv.glm partitions the data randomly.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import train_test_split, KFold
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
# Read data
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
# Drop NA rows
autoDF3=autoDF2.dropna()
autoDF3.shape
#X=autoDF3[['cylinder','displacement','horsepower','weight']]
X=autoDF3[['horsepower']]
y=autoDF3['mpg']

# Create Cross Validation function
def computeCVError(X,y,folds):
    deg=[]
    mse=[]
    # For degree 1,2,3,..10
    degree1=[1,2,3,4,5,6,7,8,9,10]
    nK=len(X)/float(folds)
    xval_err=0
    for j in degree1:
        # Split the data into 'folds'
        kf = KFold(len(X),n_folds=folds)
        for train_index, test_index in kf:
            # Partition the data according to the fold indices generated
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]
            # Transform X_train and X_test for the polynomial degree 'j'
            poly = PolynomialFeatures(degree=j)
            X_train_poly = poly.fit_transform(X_train)
            X_test_poly = poly.fit_transform(X_test)
            # Fit a polynomial regression
            linreg = LinearRegression().fit(X_train_poly, y_train)
            # Compute yhat or ypred
            y_pred = linreg.predict(X_test_poly)
            # Compute MSE *(nK/N)
            test_mse = mean_squared_error(y_test, y_pred)*float(len(X_train))/float(len(X))
            # Append to list for different folds
            mse.append(test_mse)
        # Compute the mean for polynomial 'j'
        deg.append(np.mean(mse))
    return(deg)

# Create and display a plot of the K-Fold errors
df=pd.DataFrame()
for folds in [4,5,10]:
    cvError=computeCVError(X,y,folds)
    #print(cvError)
    df1=pd.DataFrame(cvError)
    df=pd.concat([df,df1],axis=1)
    #print(cvError)
df.columns=['4-fold','5-fold','10-fold']
df=df.reindex([1,2,3,4,5,6,7,8,9,10])
df
fig2=df.plot()
fig2=plt.title("K Fold Cross Validation - Cross Validation Error vs Degree of Polynomial")
fig2=plt.xlabel("Degree of Polynomial")
fig2=plt.ylabel("Cross validation Error")
fig2.figure.savefig('foo2.png', bbox_inches='tight')

This concludes the 2nd part of this series. I will look into model tuning and model selection in R and Python in the coming parts.

Comments, suggestions and corrections are welcome!
To be continued…. Watch this space!

Also see
To see all posts see Index of posts

# Practical Machine Learning with R and Python – Part 1

# Introduction

This is the 1st part of a series of posts I intend to write on some common Machine Learning Algorithms in R and Python.
In this first part I cover the following Machine Learning Algorithms

• Univariate Regression
• Multivariate Regression
• Polynomial Regression
• K Nearest Neighbors Regression

The code includes the implementation in both R and Python. This series of posts is based on the following 2 MOOC courses I did at Stanford Online and at Coursera

1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Stanford Online
2. Applied Machine Learning in Python, Prof Kevyn Collins-Thompson, University Of Michigan, Coursera

I have used the data sets from the UCI Machine Learning repository (Communities and Crime and Auto MPG). I also use the Boston data set from the MASS package.

The content of this post and much more is now available as a compact book on Amazon in both formats – as Paperback ($9.99) and a Kindle version ($6.99/Rs449). See ‘Practical Machine Learning with R and Python – Machine Learning in Stereo’.

While coding in R and Python I found that some aspects were more convenient in one language and some in the other. For example, plotting the fit is straightforward in R, while computing the R squared, splitting into Train & Test sets etc. are readily available in Python. In any case, these minor inconveniences can easily be implemented in either language.

R squared is computed as follows
$RSS=\sum (y-yhat)^{2}$
$TSS= \sum(y-mean(y))^{2}$
$Rsquared = 1-\frac{RSS}{TSS}$

Note: You can download this R Markdown file and the associated data sets from Github at MachineLearning-RandPython

Note 1: This post was created as an R Markdown file in RStudio, which has a cool feature of including both R and Python snippets. The matplotlib plots need a workaround, but otherwise this is a really cool feature of RStudio!

## 1.1a Univariate Regression – R code

Here a simple linear regression line is fitted between a single input feature and the target variable

# Source in the R function library
source("RFunctions.R")
# Read the Boston data file
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - Statistical Learning
# Split the data into training and test sets (75:25)
train_idx <- trainTestSplit(df,trainPercent=75,seed=5)
train <- df[train_idx, ]
test <- df[-train_idx, ]
# Fit a linear regression line between 'Median value of owner occupied homes' and 'lower status of population'
fit=lm(medv~lstat,data=df)
# Display details of fit
summary(fit)
##
## Call:
## lm(formula = medv ~ lstat, data = df)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.168  -3.990  -1.318   2.034  24.500
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) 34.55384    0.56263   61.41   <2e-16 ***
## lstat       -0.95005    0.03873  -24.53   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.216 on 504 degrees of freedom
## Multiple R-squared:  0.5441, Adjusted R-squared:  0.5432
## F-statistic: 601.6 on 1 and 504 DF,  p-value: < 2.2e-16
# Display the confidence intervals
confint(fit)
##                 2.5 %     97.5 %
## (Intercept) 33.448457 35.6592247
## lstat       -1.026148 -0.8739505
plot(df$lstat,df$medv, xlab="Lower status (%)",ylab="Median value of owned homes ($1000)",
main="Median value of homes ($1000) vs Lower status (%)")
abline(fit)
abline(fit,lwd=3)
abline(fit,lwd=3,col="red")
rsquared=Rsquared(fit,test,test$medv)
sprintf("R-squared for uni-variate regression (Boston.csv)  is : %f", rsquared)
## [1] "R-squared for uni-variate regression (Boston.csv)  is : 0.556964"

## 1.1b Univariate Regression – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
#os.chdir("C:\\software\\machine-learning\\RandPython")

# Read the Boston data (the same file used by the R code above)
df = pd.read_csv("Boston.csv")
# Select the feature variable
X=df['lstat']

# Select the target
y=df['medv']

# Split into train and test sets (75:25)
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state = 0)
X_train=X_train.values.reshape(-1,1)
X_test=X_test.values.reshape(-1,1)

# Fit a linear model
linreg = LinearRegression().fit(X_train, y_train)

# Print the training and test R squared score
print('R-squared score (training): {:.3f}'.format(linreg.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'.format(linreg.score(X_test, y_test)))

# Plot the linear regression line
fig=plt.scatter(X_train,y_train)

# Create a range of points. Compute yhat=coeff1*x + intercept and plot
x=np.linspace(0,40,20)
fig1=plt.plot(x, linreg.coef_ * x + linreg.intercept_, color='red')
fig1=plt.title("Median value of homes ($1000) vs Lowe status (%)") fig1=plt.xlabel("Lower status (%)") fig1=plt.ylabel("Median value of owned homes ($1000)")
fig.figure.savefig('foo.png', bbox_inches='tight')
fig1.figure.savefig('foo1.png', bbox_inches='tight')
print "Finished"

## R-squared score (training): 0.571
## R-squared score (test): 0.458
## Finished
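
Note that the fitted line above is drawn by computing coef_ * x + intercept_ by hand. An equivalent sketch (assuming linreg, x and plt from the snippet above) lets the model do this through predict(), which also generalizes to more than one feature:

# Equivalent to linreg.coef_ * x + linreg.intercept_ for this single-feature model
y_line = linreg.predict(x.reshape(-1, 1))
fig1=plt.plot(x, y_line, color='red')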

## 1.2a Multivariate Regression – R code

# Read crimes data
crimesDF <- read.csv("crimes.csv",stringsAsFactors = FALSE)

# Remove the 1st 7 columns which do not impact output
crimesDF1 <- crimesDF[,7:length(crimesDF)]

# Convert all to numeric
crimesDF2 <- sapply(crimesDF1,as.numeric)

# Check for NAs
a <- is.na(crimesDF2)
# Set to 0 as an imputation
crimesDF2[a] <-0
#Create as a dataframe
crimesDF2 <- as.data.frame(crimesDF2)
#Create a train/test split
train_idx <- trainTestSplit(crimesDF2,trainPercent=75,seed=5)
train <- crimesDF2[train_idx, ]
test <- crimesDF2[-train_idx, ]

# Fit a multivariate regression model between crimesPerPop and all other features
fit <- lm(ViolentCrimesPerPop~.,data=train)

# Compute and print R Squared
rsquared=Rsquared(fit,test,test$ViolentCrimesPerPop)
sprintf("R-squared for multi-variate regression (crimes.csv) is : %f", rsquared)
## [1] "R-squared for multi-variate regression (crimes.csv) is : 0.653940"

## 1.2b Multivariate Regression – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Read the data
crimesDF =pd.read_csv("crimes.csv",encoding="ISO-8859-1")
#Remove the 1st 7 columns
crimesDF1=crimesDF.iloc[:,7:crimesDF.shape[1]]
# Convert to numeric
crimesDF2 = crimesDF1.apply(pd.to_numeric, errors='coerce')
# Impute NAs to 0s
crimesDF2.fillna(0, inplace=True)
# Select the X (feature variables - all)
X=crimesDF2.iloc[:,0:120]
# Set the target
y=crimesDF2.iloc[:,121]
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state = 0)
# Fit a multivariate regression model
linreg = LinearRegression().fit(X_train, y_train)
# Compute and print the R squared
print('R-squared score (training): {:.3f}'.format(linreg.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'.format(linreg.score(X_test, y_test)))
## R-squared score (training): 0.699
## R-squared score (test): 0.677

## 1.3a Polynomial Regression – R

For Polynomial regression, polynomials of degree 1, 2 & 3 are used and the R squared is computed. It can be seen that the quadratic model provides the best R squared score and hence the best fit

# Polynomial degree 1
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
# Select key columns
df2 <- df1 %>% select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
# Split as train and test sets
train_idx <- trainTestSplit(df3,trainPercent=75,seed=5)
train <- df3[train_idx, ]
test <- df3[-train_idx, ]
# Fit a model of degree 1
fit <- lm(mpg~. ,data=train)
rsquared1 <-Rsquared(fit,test,test$mpg)
sprintf("R-squared for Polynomial regression of degree 1 (auto_mpg.csv)  is : %f", rsquared1)
## [1] "R-squared for Polynomial regression of degree 1 (auto_mpg.csv)  is : 0.763607"
# Polynomial degree 2 - Quadratic
x = as.matrix(df3[1:6])
# Make a  polynomial  of degree 2 for feature variables before split
df4=as.data.frame(poly(x,2,raw=TRUE))
df5 <- cbind(df4,df3[7])

# Split into train and test set
train_idx <- trainTestSplit(df5,trainPercent=75,seed=5)
train <- df5[train_idx, ]
test <- df5[-train_idx, ]

fit <- lm(mpg~. ,data=train)
# Compute R squared
rsquared2=Rsquared(fit,test,test$mpg)
sprintf("R-squared for Polynomial regression of degree 2 (auto_mpg.csv) is : %f", rsquared2)
## [1] "R-squared for Polynomial regression of degree 2 (auto_mpg.csv) is : 0.831372"
# Polynomial degree 3
x = as.matrix(df3[1:6])
# Make a polynomial of degree 3 of the feature variables before the split
df4=as.data.frame(poly(x,3,raw=TRUE))
df5 <- cbind(df4,df3[7])
train_idx <- trainTestSplit(df5,trainPercent=75,seed=5)
train <- df5[train_idx, ]
test <- df5[-train_idx, ]
# Fit a model of degree 3
fit <- lm(mpg~. ,data=train)
# Compute R squared
rsquared3=Rsquared(fit,test,test$mpg)
sprintf("R-squared for Polynomial regression of degree 2 (auto_mpg.csv)  is : %f", rsquared3)
## [1] "R-squared for Polynomial regression of degree 2 (auto_mpg.csv)  is : 0.773225"
df=data.frame(degree=c(1,2,3),Rsquared=c(rsquared1,rsquared2,rsquared3))
# Make a plot of Rsquared and degree
ggplot(df,aes(x=degree,y=Rsquared)) +geom_point() + geom_line(color="blue") +
ggtitle("Polynomial regression - R squared vs Degree of polynomial") +
xlab("Degree") + ylab("R squared")

## 1.3b Polynomial Regression – Python

For Polynomial regression, polynomials of degree 1, 2 & 3 are used and the R squared is computed. It can be seen that the quadratic model provides the best R squared score on the test set and hence the best fit

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
# Read the data
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
# Select key columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
# Convert columns to numeric
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
# Drop NAs
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['cylinder','displacement','horsepower','weight','acceleration','year']]
y=autoDF3['mpg']

# Polynomial degree 1
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state = 0)
linreg = LinearRegression().fit(X_train, y_train)
print('R-squared score - Polynomial degree 1 (training): {:.3f}'.format(linreg.score(X_train, y_train)))
# Compute R squared
rsquared1 =linreg.score(X_test, y_test)
print('R-squared score - Polynomial degree 1 (test): {:.3f}'.format(linreg.score(X_test, y_test)))

# Polynomial degree 2
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_poly, y,random_state = 0)
linreg = LinearRegression().fit(X_train, y_train)

# Compute R squared
print('R-squared score - Polynomial degree 2 (training): {:.3f}'.format(linreg.score(X_train, y_train)))
rsquared2 =linreg.score(X_test, y_test)
print('R-squared score - Polynomial degree 2 (test): {:.3f}\n'.format(linreg.score(X_test, y_test)))

#Polynomial degree 3

poly = PolynomialFeatures(degree=3)
X_poly = poly.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_poly, y,random_state = 0)
linreg = LinearRegression().fit(X_train, y_train)
print('R-squared score - Polynomial degree 3 (training): {:.3f}'
.format(linreg.score(X_train, y_train)))
# Compute R squared
rsquared3 =linreg.score(X_test, y_test)
print('R-squared score Polynomial degree 3 (test): {:.3f}\n'.format(linreg.score(X_test, y_test)))
degree=[1,2,3]
rsquared =[rsquared1,rsquared2,rsquared3]
fig2=plt.plot(degree,rsquared)
fig2=plt.title("Polynomial regression - R squared vs Degree of polynomial")
fig2=plt.xlabel("Degree")
fig2=plt.ylabel("R squared")
fig2.figure.savefig('foo2.png', bbox_inches='tight')
print "Finished plotting and saving"

## R-squared score - Polynomial degree 1 (training): 0.811
## R-squared score - Polynomial degree 1 (test): 0.799
## R-squared score - Polynomial degree 2 (training): 0.861
## R-squared score - Polynomial degree 2 (test): 0.847
##
## R-squared score - Polynomial degree 3 (training): 0.933
## R-squared score Polynomial degree 3 (test): 0.710
##
## Finished plotting and saving
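
As an alternative sketch (not the method used above), the same degree-2 fit can be wrapped in a Pipeline so that the polynomial expansion is applied consistently to the train and test partitions; it assumes X and y as defined above. The numbers should be close to the degree-2 results above, though not identical, since here the split is done before the polynomial expansion.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# Chain the polynomial expansion and the linear fit
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X_train, y_train)
print('R-squared score - Pipeline degree 2 (test): {:.3f}'.format(model.score(X_test, y_test)))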

## 1.4 K Nearest Neighbors

The code below implements KNN Regression in both R and Python. This is done for different numbers of neighbors, and the R squared is computed in each case. The exercise is then repeated after performing feature scaling, and it can be seen that the model fit is much better after feature scaling. Normalization refers to

$X_{normalized} = \frac{X-min(X)}{max(X)-min(X)}$

Another technique that is used is Standardization which is

$X_{standardized} = \frac{X-mean(X)}{sd(X)}$
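
As a quick numerical illustration of the two formulas (a minimal numpy sketch with toy values; sklearn's MinMaxScaler and StandardScaler perform the column-wise equivalents):

import numpy as np
x = np.array([2.0, 4.0, 6.0, 10.0])                   # toy feature values
x_normalized = (x - x.min()) / (x.max() - x.min())    # min-max normalization, scaled to [0,1]
x_standardized = (x - x.mean()) / x.std()             # standardization: zero mean, unit sd
print(x_normalized)
print(x_standardized)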

## 1.4a K Nearest Neighbors Regression – R (Unnormalized)

The R code below does not use feature scaling

# KNN regression requires the FNN package
library(FNN)
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]

# Split train and test
train_idx <- trainTestSplit(df3,trainPercent=75,seed=5)
train <- df3[train_idx, ]
test <- df3[-train_idx, ]
#  Select the feature variables
train.X=train[,1:6]
# Set the target for training
train.Y=train[,7]
# Do the same for test set
test.X=test[,1:6]
test.Y=test[,7]

rsquared <- NULL
# Create a list of neighbors
neighbors <-c(1,2,4,8,10,14)
for(i in seq_along(neighbors)){
# Perform a KNN regression fit
knn=knn.reg(train.X,test.X,train.Y,k=neighbors[i])
# Compute R squared
rsquared[i]=knnRSquared(knn$pred,test.Y)
}
# Make a dataframe for plotting
df <- data.frame(neighbors,Rsquared=rsquared)
# Plot the number of neighbors vs the R squared
ggplot(df,aes(x=neighbors,y=Rsquared)) + geom_point() +geom_line(color="blue") +
xlab("Number of neighbors") + ylab("R squared") +
ggtitle("KNN regression - R squared vs Number of Neighbors (Unnormalized)")

## 1.4b K Nearest Neighbors Regression – Python (Unnormalized)

The Python code below does not use feature scaling

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.neighbors import KNeighborsRegressor
autoDF =pd.read_csv("auto_mpg.csv",encoding="ISO-8859-1")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['cylinder','displacement','horsepower','weight','acceleration','year']]
y=autoDF3['mpg']
# Perform a train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# Create a list of neighbors
rsquared=[]
neighbors=[1,2,4,8,10,14]
for i in neighbors:
    # Fit a KNN model
    knnreg = KNeighborsRegressor(n_neighbors = i).fit(X_train, y_train)
    # Compute R squared
    rsquared.append(knnreg.score(X_test, y_test))
    print('R-squared test score: {:.3f}'
    .format(knnreg.score(X_test, y_test)))
# Plot the number of neighbors vs the R squared
fig3=plt.plot(neighbors,rsquared)
fig3=plt.title("KNN regression - R squared vs Number of neighbors (Unnormalized)")
fig3=plt.xlabel("Neighbors")
fig3=plt.ylabel("R squared")
fig3.figure.savefig('foo3.png', bbox_inches='tight')
print("Finished plotting and saving")
## R-squared test score: 0.527
## R-squared test score: 0.678
## R-squared test score: 0.707
## R-squared test score: 0.684
## R-squared test score: 0.683
## R-squared test score: 0.670
## Finished plotting and saving

## 1.4c K Nearest Neighbors Regression – R (Normalized)

It can be seen that R squared improves when the features are normalized.

df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
df2 <- df1 %>% select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
df3 <- df2[complete.cases(df2),]
# Perform MinMaxScaling of feature variables
train.X.scaled=MinMaxScaler(train.X)
test.X.scaled=MinMaxScaler(test.X)
# Create a list of neighbors
rsquared <- NULL
neighbors <-c(1,2,4,6,8,10,12,15,20,25,30)
for(i in seq_along(neighbors)){
# Fit a KNN model
knn=knn.reg(train.X.scaled,test.X.scaled,train.Y,k=neighbors[i])
# Compute R squared
rsquared[i]=knnRSquared(knn$pred,test.Y)

}

df <- data.frame(neighbors,Rsquared=rsquared)
# Plot the number of neighors vs the R squared
ggplot(df,aes(x=neighbors,y=Rsquared)) + geom_point() +geom_line(color="blue") +
xlab("Number of neighbors") + ylab("R squared") +
ggtitle("KNN regression - R squared vs Number of Neighors(Normalized)")

## 1.4d K Nearest Neighbors Regression – Python (Normalized)

R squared improves when the features are normalized with MinMaxScaling

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import MinMaxScaler
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['cylinder','displacement','horsepower','weight','acceleration','year']]
y=autoDF3['mpg']

# Perform a train/ test  split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# Use MinMaxScaling
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
# Apply scaling on test set
X_test_scaled = scaler.transform(X_test)

# Create a list of neighbors
rsquared=[]
neighbors=[1,2,4,6,8,10,12,15,20,25,30]
for i in neighbors:
    # Fit a KNN model
    knnreg = KNeighborsRegressor(n_neighbors = i).fit(X_train_scaled, y_train)
    # Compute R squared
    rsquared.append(knnreg.score(X_test_scaled, y_test))
    print('R-squared test score: {:.3f}'
    .format(knnreg.score(X_test_scaled, y_test)))

# Plot the number of neighors vs the R squared
fig4=plt.plot(neighbors,rsquared)
fig4=plt.title("KNN regression - R squared vs Number of neighbors(Normalized)")
fig4=plt.xlabel("Neighbors")
fig4=plt.ylabel("R squared")
fig4.figure.savefig('foo4.png', bbox_inches='tight')
print "Finished plotting and saving"
## R-squared test score: 0.703
## R-squared test score: 0.810
## R-squared test score: 0.830
## R-squared test score: 0.838
## R-squared test score: 0.834
## R-squared test score: 0.828
## R-squared test score: 0.827
## R-squared test score: 0.826
## R-squared test score: 0.816
## R-squared test score: 0.815
## R-squared test score: 0.809
## Finished plotting and saving

# Conclusion

In this initial post I cover the regression models, where the output is continuous. I intend to touch upon other Machine Learning algorithms in subsequent posts.
Comments, suggestions and corrections are welcome.

Watch this space!

To be continued….

To see all posts see Index of posts