Masters of Spin: Unraveling the web with R


Here is a look at some of the masters of spin bowling in cricket. Specifically, this post analyzes 3 giants of spin bowling in recent times, namely Shane Warne of Australia, Muttiah Muralitharan of Sri Lanka and our very own Anil Kumble of India. "Who is the best leggie?" has been a hot topic in cricket in recent years. As in my earlier post "Analyzing cricket's batting legends: Through the mirage with R", I was not interested in gross statistics like the most wickets taken.

In this post I try to analyze how each bowler has performed over his entire Test career. All three bowlers have bowled in roughly 240 innings each. All other things being equal, it does make sense to look a little deeper into what their performance numbers reveal about them. As in my earlier posts, the data has been taken from ESPN Cricinfo's Statsguru.

I have chosen these 3 spinners for the following reasons

Shane Warne : Clearly a deadly spinner who could turn the ball at absurd angles
Muttiah Muralitharan : While controversy dogged Muralitharan, he was virtually unplayable at many cricketing venues
Anil Kumble : A master spinner whose chess-like strategy usually outwitted the best of batsmen.

The King of Spin according to my analysis below is clearly Muttiah Muralitharan. This is shown in the final charts, where the performances of the bowlers are plotted on a single graph. Muralitharan is the more lethal bowler, with a higher strike rate. In addition, Muralitharan has the lowest mean economy rate amongst the 3 for wickets in the range 3 to 7. Feel free to add your own thoughts, comments and dissent.

The code for this implementation is available on GitHub at mastersOfSpin. Feel free to clone, fork or hack the code to your own needs. You should be able to use the code as-is on other bowlers with little or no modification.

So here goes

Wickets frequency percentage vs Wickets plot
For this plot I determine how frequently the bowler takes 'n' wickets and calculate this as a percentage of all innings in his career. This is done as follows in R

# Create a table of wickets vs the frequency of the wickets
wktsDF <- as.data.frame(table(bowler$Wkts))
colnames(wktsDF) <- c("Wickets","Freq")
# Calculate the wickets percentage
wktsDF$freqPercent <- (wktsDF$Freq/sum(wktsDF$Freq)) * 100

and plot this as a graph.
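A minimal sketch of the plot itself, assuming wktsDF was built as above (base R's barplot is sufficient)

# Plot wicket frequency percentage against the number of wickets
barplot(wktsDF$freqPercent, names.arg = as.character(wktsDF$Wickets),
xlab = "Wickets", ylab = "Frequency percentage",
main = "Wickets frequency percentage vs Wickets")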

This is shown for Warne below
1) Shane Warne –  Wickets Frequency percentage vs Wickets plot

warne-wkts-1

Wickets – Mean Economy rate chart
This chart plots the mean economy rate for 'n' wickets for the bowler. As an example, to do this for 3 wickets for Shane Warne, a list is created of the economy rates in every innings in which Warne took 3 wickets. The mean of this list is then computed and stored against 3 wickets. This is done for all wicket counts in Warne's career. The R snippet for this implementation is shown below

econRate <- NULL
for (i in 0:max(as.numeric(as.character(bowler$Wkts)))) {
# Create a vector of economy rates for innings with 'i' wickets
a <- bowler[bowler$Wkts == i,]$Econ
b <- as.numeric(as.character(a))
# Compute the mean economy rate for 'i' wickets
econRate[i+1] <- mean(b, na.rm=TRUE)
print(econRate[i+1])
}
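The mean economy rates can then be plotted against the wicket count. A minimal sketch, assuming econRate was computed as above

# Plot mean economy rate against the number of wickets
wickets <- 0:(length(econRate) - 1)
plot(wickets, econRate, type="o", col="blue", xlab="Wickets",
ylab="Mean economy rate", main="Wickets vs Mean economy rate")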

Shane Warne –  Wickets vs Mean Economy rate
This plot for Shane Warne is shown below

warne-er-1

The plots for M Muralitharan and Anil Kumble are included below

2) M Muralitharan – Wickets Frequency percentage vs Wickets plot
murali-wkts

M Muralitharan – Wickets vs Mean Economy rate

murali-er

3) Anil Kumble – Wickets Frequency percentage vs Wickets plot
kumble-wkts

Anil Kumble – Wickets vs Mean Economy rate
kumble-er

Finally, the relative performance of the bowlers is generated by plotting the wicket frequencies and the mean economy rates of all three bowlers on single charts.
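A hedged sketch of one way to superimpose the wicket frequency percentages; warneDF, muraliDF and kumbleDF are illustrative names for each bowler's frequency table built as shown earlier

# Overlay the three bowlers' wicket frequency percentages on one plot
plot(as.numeric(as.character(warneDF$Wickets)), warneDF$freqPercent,
type="o", col="red", xlab="Wickets", ylab="Frequency percentage")
lines(as.numeric(as.character(muraliDF$Wickets)), muraliDF$freqPercent,
type="o", col="blue")
lines(as.numeric(as.character(kumbleDF$Wickets)), kumbleDF$freqPercent,
type="o", col="darkgreen")
legend("topright", legend=c("Warne","Muralitharan","Kumble"),
col=c("red","blue","darkgreen"), lty=1)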

This is shown below

Relative wicket percentages
relative-wkts-pct-1

Relative mean economy rate
relative-er-1

As can be seen in the above 2 charts, M Muralitharan not only has a higher strike rate for wickets in the 3 to 7 range, he also has a much lower mean economy rate.

You can clone/fork the R code from GitHub at mastersOfSpin

Conclusion: The performance of Muttiah Muralitharan is clearly superior to both Shane Warne and Anil Kumble. In my opinion the king of spin is M Muralitharan, followed by Shane Warne and finally Anil Kumble.

Feel free to dispute my claims. Comments, suggestions are more than welcome

Also see

1. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
2. Informed choices through Machine Learning-2: Pitting together Kumble, Kapil, Chandra
3. Analyzing cricket’s batting legends – Through the mirage with R

You may also like
1. A peek into literacy in India:Statistical learning with R
2. A crime map of India in R: Crimes against women
3.  What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
4.  Bend it like Bluemix, MongoDB with autoscaling – Part 2

Analyzing cricket’s batting legends – Through the mirage with R


In this post I do a deep dive into the records of the all-time batting legends of cricket to identify interesting information about their achievements. In my opinion, the usual currencies for a batsman's performance, like the most centuries or the highest batting average, are too gross in their significance. I wanted something finer, where we can pinpoint the specific strengths of different players.

This post will answer the following questions.
– How many times has a batsman scored runs in a specific range say 20-40 or 80-100 and so on?
– How do different batsmen compare against each other?
– Which of the batsmen stayed well beyond their sell-by date?
– Which of the batsmen retired too soon?
– What is the propensity for a batsman to get caught, bowled, run out etc?

For this analysis I have chosen the batsmen below for the following reasons
Sir Don Bradman : With a batting average of 99.94, Bradman was an obvious choice
Sunil Gavaskar : One of India's batting icons, who amassed 774 runs in his debut series against the formidable West Indies, in the West Indies
Brian Lara : A West Indian batting hero who has double, triple and quadruple centuries under his belt
Sachin Tendulkar : A prolific run-getter, India's idol, who holds the record for the most Test centuries by any batsman (51 centuries)
Ricky Ponting : A dangerous batsman against any bowling attack, who could demolish any bowler on his day
Rahul Dravid : India's most dependable batsman, who could weather any storm in a match single-handedly
AB De Villiers : The destructive South African batsman who can pulverize any attack when he gets going

The analysis has been performed on these batsmen on various parameters. Clearly different batsmen have shone in different batting aspects. The analysis focuses on each of these to see how the different players stack up against each other.

The data for the above batsmen has been taken from ESPN Cricinfo. Only the batting statistics of the above batsmen in Test cricket have been taken. The implementation for this analysis has been done using the R language. The R implementation, datasets and the plots can be accessed on GitHub at analyze-batting-legends. Feel free to fork or clone the code. You should be able to use the code with minor modifications on other players. Also go ahead, make your own modifications and hack away!

Key insights from my analysis below
a) Sir Don Bradman's unmatchable record of a 99.94 Test average, with several centuries, double and triple centuries, makes him the gold standard of Test batting, as seen in the 'All-time best batsman' chart below
b) Sunil Gavaskar is the king of batting in India, followed by Rahul Dravid and finally Sachin Tendulkar. See the charts below for details
c) Sunil Gavaskar, AB De Villiers and Rahul Dravid had at least 2 more years of good Test cricket in them. Their retirements were premature. This is based on the individual batsmen's career graphs (moving averages below)
d) Brian Lara, Sachin Tendulkar, Ricky Ponting and Vivian Richards retired at a time when their batting was clearly declining. The writing on the wall was clear and they had to go (see moving averages below)
e) The biggest hitter of 4's was Vivian Richards, with Brian Lara in 2nd place. Tendulkar & Dravid follow behind. Dravid is a surprise as he has the image of a defender
f) While Sir Don Bradman made huge scores, the number of 4's in his innings was significantly less. This could be because the grounds in those days did not carry the ball far enough
g) With respect to dismissals, Richards was able to keep his wicket intact (not out) 11% of the time, followed by Ponting, Tendulkar, De Villiers and Dravid (10%), and Gavaskar & Bradman (7%)

A) Runs frequency table and charts
These plots normalize the batting performances of the different batsmen, since the number of innings played ranges from 89 (Bradman) to 348 (Tendulkar), by calculating the percentage frequency with which a batsman scores runs in a particular range. E.g. Sunil Gavaskar made scores between 60-80 in 10% of his total innings.
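A hedged sketch of how such a frequency table can be computed, assuming 'batsman' is a data frame of innings with a Runs column (the names are illustrative, not the post's actual code)

# Bucket the scores into 20-run ranges and convert the counts to percentages
runs <- as.numeric(as.character(batsman$Runs))
ranges <- cut(runs, breaks=seq(0, 420, by=20), right=FALSE)
runsFreq <- table(ranges)
runsPercent <- (runsFreq/sum(runsFreq)) * 100
round(runsPercent, 1)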

This is shown in a tabular form below

runs-frequency
The individual charts for each of the players are shown below. The top performers, after removing the ranges 0-20 & 20-40, are
Between 40-60 runs – 1) Ricky Ponting (16.4%) 2) Brian Lara (15.8%) 3) AB De Villiers (14.6%)
Between 60-80 runs – 1) Vivian Richards (18%) 2) AB De Villiers (10.2%) 3) Sunil Gavaskar (10%)
Between 80-100 runs – 1) Rahul Dravid (7.6%) 2) Brian Lara (7.4%) 3) AB De Villiers (6.4%)
Between 100 -120 runs – 1) Sunil Gavaskar (7.5%) 2) Sir Don Bradman (6.8%) 3) Vivian Richards (5.8%)
Between 120-140 runs – 1) Sir Don Bradman (6.8%) 2) Sachin Tendulkar (2.5%) 3) Vivian Richards (2.3%)

The percentage frequency for Brian Lara is included below
1) Brian Lara
lara-run-freq

The above chart shows that, out of the total number of innings played by Brian Lara, he scored runs in the range (40-60) 16% of the time. The chart also shows that Lara scored between 0-20 around 40% of the time, while scoring in the ranges 360-380 & 380-400 around 1% of the time.
The same chart is displayed as a continuous graph below
lara-run-perf

The run frequency charts for the other batsmen are
2) Sir Don Bradman
a) Run frequency
bradman-freq
Note: Notice the significant contributions by Sir Don Bradman in the ranges 120-140, 140-160, 220-240, all the way up to 340
b) Performance
bradman-perf
3) Sunil Gavaskar
a) Runs frequency chart
gavaskar-freq
b) Performance chart
gavaskar-perf
4) Sachin Tendulkar
a) Runs frequency chart
tendulkar-freq
b) Performance chart
tendulkar-perf
5) Ricky Ponting
a) Runs frequency
ponting-freq
b) Performance
ponting-perf
6) Rahul Dravid
a) Runs frequency chart
dravid-freq
b) Performance chart
dravid-perf
7) Vivian Richards
a) Runs frequency chart
richards-freq
b) Performance chart
richards-perf
8) AB De Villiers
a) Runs frequency chart
villiers-freq
b)  Performance chart
villier-perf

 B) Relative performance of the players
In this section I try to measure the relative performance of the players by superimposing the performance graphs obtained above. You may say that "comparisons are odious!" But equally odious are myths that are based on gross facts like the highest runs, average or most number of centuries.
a) All-time best batsman
(Sir Don Bradman, Sunil Gavaskar, Vivian Richards, Sachin Tendulkar, Ricky Ponting, Brian Lara, Rahul Dravid, AB De Villiers)
overall-batting-perf
From the above chart it is clear that Sir Don Bradman is the 'gold' standard in batting. He is well above the others for run ranges from 100 to 350.
b) Best Indian batsman (Sunil Gavaskar, Sachin Tendulkar, Rahul Dravid)
srt-sg-dravid-perf
The above chart shows that Gavaskar is ahead of the other two for the key ranges between 100 – 130, with an almost 8% contribution. He is followed by Dravid, who is ahead of Tendulkar in the range 80-120. According to me, the all-time best Indian batsmen are 1) Sunil Gavaskar 2) Rahul Dravid 3) Sachin Tendulkar

c) Best batsman -( Brian Lara, Ricky Ponting, Sachin Tendulkar, AB De Villiers)
This chart was prepared since this comparison has often been made in recent times

rel

This chart shows the following ranking 1) AB De Villiers 2) Sachin Tendulkar 3) Brian Lara/Ricky Ponting
C) Chart of 4’s

fours-batsman
This chart is plotted with a 2nd order curve of the number of  4’s versus the total runs in the innings
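A hedged sketch of such a fit, assuming 'batsman' has numeric Runs and X4s (fours) columns; the column names are assumptions

# Fit a 2nd order polynomial of 4's versus runs and overlay the curve
runs <- as.numeric(as.character(batsman$Runs))
fours <- as.numeric(as.character(batsman$X4s))
fit <- lm(fours ~ runs + I(runs^2))
plot(runs, fours, pch=15, col="blue", xlab="Runs", ylab="4's")
xv <- seq(0, max(runs, na.rm=TRUE), length.out=100)
lines(xv, predict(fit, newdata=data.frame(runs=xv)), col="red", lwd=2)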
1) Brian Lara
lara-4s
2) Sir Don Bradman
bradman-4s
3) Sunil Gavaskar
gavaskar-4s
4) Sachin Tendulkar
tendulkar-4s
5) Ricky Ponting
ponting-4s
6) Rahul Dravid
dravid-4s
7) Vivian Richards
richards-4s
8) AB De Villiers
villiers-4s
D) Proclivity for type of dismissal
The charts below show how often each batsman was out bowled, caught, run out etc.
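A hedged sketch of how these proportions can be computed, assuming 'batsman' has a Dismissal column (the column name is an assumption)

# Tabulate the dismissal types and convert to percentages
dismissals <- table(batsman$Dismissal)
round(prop.table(dismissals) * 100, 1)
pie(dismissals, main="Proportion of dismissal types")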
1) Brian Lara
lara-dismissals
2) Sir Don Bradman
bradman-dismissals
3) Sunil  Gavaskar
gavaskar-dismissals
4) Sachin Tendulkar
tendulkar-dismissals
5) Ricky Ponting
ponting-dismissals
6) Rahul Dravid
dravid-dismissals
7) Vivian Richards
richards-dismissals
8) AB De Villiers
villiers-dismissals
E) Moving Average
The plots below present each batsman's performance as a time series (chronological), displayed as a continuous grey line. A moving average is computed using 'loess regression' and is shown as the dark line. This dark line represents the player's improvement or decline. The moving average plots are shown below
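A hedged sketch of the moving average, assuming 'runs' holds the innings-by-innings scores in chronological order with no missing values

# Grey line: the raw scores; dark line: the loess moving average
innings <- seq_along(runs)
plot(innings, runs, type="l", col="grey", xlab="Innings", ylab="Runs")
fit <- loess(runs ~ innings)
lines(innings, predict(fit), col="black", lwd=3)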
1) Brian Lara
lara-ma
2) Sir Don Bradman
bradman-ma
Sir Don Bradman's moving average shows a remarkably consistent performance over the years. He probably could have continued for a couple more years.
3) Sunil Gavaskar


Gavaskar's moving average does show a good improvement from a dip around 1983. Gavaskar retired bowing to public pressure, on a mistaken belief that he was underperforming. Gavaskar could have continued for a couple more years.
4) Sachin Tendulkar


Tendulkar's performance is clearly on the decline from 2011. He could have announced his retirement at least 2 years earlier.
5) Ricky Ponting
ponting-ma
Ponting's peak performance was around 2005, and it goes steeply downward from then on. Ponting too could have retired around 2012.
6) Rahul Dravid


Dravid seems to have recovered very effectively from his poor form around 2009. His overall performance shows steady improvement. Dravid's retirement announcement appeared impulsive. Dravid had another 2 good years of Test cricket in him.
7) Vivian Richards
richards-ma
Richards' performance seems to have dropped around 1984 and remained that way.
8) AB De Villiers
villiers-ma
AB De Villiers' moving average shows a steady upward swing from 2009 onwards. De Villiers had at least 3-4 years of great Test cricket ahead of him. Personally I feel he would have shattered a few batting records. It is a real pity he decided to hang up his boots.

Finally, as mentioned above, the dataset, the R implementation and all the charts are available on GitHub at analyze-batting-legends. Feel free to fork and clone the code. The code should work for other batsmen as-is. Also go ahead and make any modifications for obtaining further insights.

Conclusion: The batting legends have been analyzed from various angles, namely i) What is the frequency of runs scored in a particular range? ii) How does each batsman compare with the others for relative run frequencies in a specified range? iii) How does the batsman get out? iv) What were the peak and lean periods of the batsman, and did he recover or slump after them? While the batsmen themselves played in different eras, I think in an overall sense their performances under the conditions of their time will be comparable.
Anyway, feel free to let me know your thoughts. If you see other patterns in the data, do drop in a comment.

You may also like
1. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
2. Informed choices through Machine Learning-2: Pitting together Kumble, Kapil,

Also see
– A crime map of India in R – Crimes against women
– What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
– Bend it like Bluemix, MongoDB with autoscaling – Part 1

R incantations for the uninitiated


Here are some basic R incantations that will get you started with R

A) Scalars & Vectors:
Chant 1 – Now repeat after me, with your right hand forward at shoulder height: "In R there are no scalars. There are only vectors of length 1".
Just kidding:-)

To create a variable x with the value 5 we write
x <- 5
or
x = 5

While the former notation may seem odd, it is actually more logical, considering that the RHS is assigned to the LHS. Anyway, both work.
Vectors can be created as follows
a <- c(2:10)
b <- c("This", "is", "R", "language")

B) Sequences:
There are several ways of creating the sequences of numbers which you intend to use for your computation
a <- seq(5, 25) # Sequence from 5 to 25

Other ways to create sequences
Increment by 2
> seq(5, 25, by=2)
[1]  5  7  9 11 13 15 17 19 21 23 25

>seq(5,25,length=18) # Create sequence from 5 to 25 with a total length of 18
[1]  5.000000  6.176471  7.352941  8.529412  9.705882 10.882353 12.058824 13.235294
[9] 14.411765 15.588235 16.764706 17.941176 19.117647 20.294118 21.470588 22.647059
[17] 23.823529 25.000000

C) Conditions and loops
An if-else if-else construct goes like this
if(condition) {
do something
} else if (condition) {
do something
} else {
do something
}

Note: Make sure the statements appear as above, with the 'else if' and 'else' on the same line as the closing braces; otherwise R complains about an 'unexpected else'.
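A small runnable example of the construct

x <- 7
if (x > 10) {
print("x is greater than 10")
} else if (x > 5) {
print("x is greater than 5 but at most 10")
} else {
print("x is 5 or less")
}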

D) Loops
I would like to mention 2 ways of doing ‘for’ loops  in R.
a) for (i in 1:10) {
statement
}

> a <- seq(5,25,length=10)
> a
[1]  5.000000  7.222222  9.444444 11.666667 13.888889 16.111111 18.333333
[8] 20.555556 22.777778 25.000000

b) Loop along the vector with seq_along. Note: This is useful as we don't have to know the length of the vector/sequence
for (i in seq_along(a)){
+   print(a[i])
+ }

[1] 5
[1] 7.222222
[1] 9.444444
[1] 11.66667

There are other ways of looping with 'while' and 'repeat' which I have not included in this post.

R makes manipulation of matrices and data frames really easy. All the elements of a matrix must be of the same type (typically numeric), while each column of a data frame can have a different type

E) Matrix
> rnorm(12,5,2)
[1] 2.699961 3.160208 5.087478 3.969129 3.317840 4.551565 2.585758 2.397780
[9] 5.297535 6.574757 7.468268 2.440835

a) Create a vector of 12 random numbers with a mean of 5 and SD of 2
> a <-rnorm(12,5,2)
b) Convert the vector to a matrix with 4 rows and 3 columns
> mat <- matrix(a,4,3)
> mat
         [,1]     [,2]     [,3]
[1,] 5.197010 3.839281 9.022818
[2,] 4.053590 5.321399 5.587495
[3,] 4.225763 4.873768 6.648151
[4,] 4.709784 4.129093 2.575523

c) Subset rows 1 & 2 from the matrix
> mat[1:2,]
[,1]     [,2]     [,3]
[1,] 5.19701 3.839281 9.022818
[2,] 4.05359 5.321399 5.587495

d) Subset matrix rows 1 & 2 and columns 2 & 3
> mat[1:2,2:3]
[,1]     [,2]
[1,] 3.839281 9.022818
[2,] 5.321399 5.587495

e) Subset all rows of the matrix for column 3
> mat[,3]
[1] 9.022818 5.587495 6.648151 2.575523

f) Add row names and column names to the matrix as follows
> names <- c("tim","pat","joe","jim")
> v <- data.frame(names,mat)
> v
names       X1       X2       X3
1   tim 5.197010 3.839281 9.022818
2   pat 4.053590 5.321399 5.587495
3   joe 4.225763 4.873768 6.648151
4   jim 4.709784 4.129093 2.575523

> colnames(v) <- c("names","a","b","c")
> v
names        a        b        c
1   tim 5.197010 3.839281 9.022818
2   pat 4.053590 5.321399 5.587495
3   joe 4.225763 4.873768 6.648151
4   jim 4.709784 4.129093 2.575523

F) Data Frames
In R, data frames are the most important method for manipulating large amounts of data. One can read data in .csv format into a data frame using
df <- read.csv("mydata.csv")
To get a feel for data frames it is useful to play around with the numerous data sets that are available with the installation of R.
To check the available data sets do
>data()
AirPassengers                    Monthly Airline Passenger Numbers 1949-1960
BJsales                          Sales Data with Leading Indicator
BJsales.lead (BJsales)           Sales Data with Leading Indicator
BOD                              Biochemical Oxygen Demand
CO2                              Carbon Dioxide Uptake in Grass Plants
ChickWeight                      Weight versus age of chicks on different diets
...

I will be using the mtcars data frame. Here are some of the most important commands on data frames
a) Load the mtcars data set
data(mtcars)
b) > head(mtcars,3) # Display the top 3 rows of the data frame
mpg cyl disp  hp drat    wt  qsec vs am gear carb
Mazda RX4     21.0   6  160 110 3.90 2.620 16.46  0  1    4    4
Mazda RX4 Wag 21.0   6  160 110 3.90 2.875 17.02  0  1    4    4
Datsun 710    22.8   4  108  93 3.85 2.320 18.61  1  1    4    1

c) > tail(mtcars,4) # Display the bottom 4 rows of the data frame
mpg cyl disp  hp drat   wt qsec vs am gear carb
Ford Pantera L 15.8   8  351 264 4.22 3.17 14.5  0  1    5    4
Ferrari Dino   19.7   6  145 175 3.62 2.77 15.5  0  1    5    6
Maserati Bora  15.0   8  301 335 3.54 3.57 14.6  0  1    5    8
Volvo 142E     21.4   4  121 109 4.11 2.78 18.6  1  1    4    2

d) > names(mtcars)  # Display the names of the columns of the data frame
[1] "mpg"  "cyl"  "disp" "hp"   "drat" "wt"   "qsec" "vs"   "am"   "gear" "carb"

e) > summary(mtcars) # Display the summary of the data frame
mpg             cyl             disp             hp             drat             wt
Min.   :10.40   Min.   :4.000   Min.   : 71.1   Min.   : 52.0   Min.   :2.760   Min.   :1.513
1st Qu.:15.43   1st Qu.:4.000   1st Qu.:120.8   1st Qu.: 96.5   1st Qu.:3.080   1st Qu.:2.581
Median :19.20   Median :6.000   Median :196.3   Median :123.0   Median :3.695   Median :3.325
Mean   :20.09   Mean   :6.188   Mean   :230.7   Mean   :146.7   Mean   :3.597   Mean   :3.217
3rd Qu.:22.80   3rd Qu.:8.000   3rd Qu.:326.0   3rd Qu.:180.0   3rd Qu.:3.920   3rd Qu.:3.610
Max.   :33.90   Max.   :8.000   Max.   :472.0   Max.   :335.0   Max.   :4.930   Max.   :5.424
qsec             vs               am              gear            carb
Min.   :14.50   Min.   :0.0000   Min.   :0.0000   Min.   :3.000   Min.   :1.000
1st Qu.:16.89   1st Qu.:0.0000   1st Qu.:0.0000   1st Qu.:3.000   1st Qu.:2.000
Median :17.71   Median :0.0000   Median :0.0000   Median :4.000   Median :2.000
Mean   :17.85   Mean   :0.4375   Mean   :0.4062   Mean   :3.688   Mean   :2.812
3rd Qu.:18.90   3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:4.000   3rd Qu.:4.000
Max.   :22.90   Max.   :1.0000   Max.   :1.0000   Max.   :5.000   Max.   :8.000

f) > str(mtcars) # Generate a concise description of the data frame - values in each column, factors
'data.frame':   32 obs. of  11 variables:
$ mpg : num  21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
$ cyl : num  6 6 4 6 8 6 8 4 4 6 ...
$ disp: num  160 160 108 258 360 ...
$ hp  : num  110 110 93 110 175 105 245 62 95 123 ...
$ drat: num  3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ...
$ wt  : num  2.62 2.88 2.32 3.21 3.44 ...
$ qsec: num  16.5 17 18.6 19.4 17 ...
$ vs  : num  0 0 1 1 0 1 0 1 1 1 ...
$ am  : num  1 1 1 0 0 0 0 0 0 0 ...
$ gear: num  4 4 4 3 3 3 3 4 4 4 ...
$ carb: num  4 4 1 1 2 1 4 2 2 4 ...

g) > mtcars[mtcars$mpg == 10.4,] #Select all rows in mtcars where the mpg column has a value 10.4
mpg cyl disp  hp drat    wt  qsec vs am gear carb
Cadillac Fleetwood  10.4   8  472 205 2.93 5.250 17.98  0  0    3    4
Lincoln Continental 10.4   8  460 215 3.00 5.424 17.82  0  0    3    4

h) > mtcars[(mtcars$mpg >20) & (mtcars$mpg <24),] # Select all rows in mtcars where the mpg > 20 and mpg < 24
mpg cyl  disp  hp drat    wt  qsec vs am gear carb
Mazda RX4      21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
Mazda RX4 Wag  21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
Datsun 710     22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
Hornet 4 Drive 21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
Merc 230       22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
Toyota Corona  21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1
Volvo 142E     21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2

i) > myset <- mtcars[(mtcars$cyl == 6) | (mtcars$cyl == 4),] # Get all cars which have either 4 or 6 cylinders
> myset
mpg cyl  disp  hp drat    wt  qsec vs am gear carb
Mazda RX4      21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
Mazda RX4 Wag  21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
Datsun 710     22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
Hornet 4 Drive 21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
Valiant        18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
Merc 240D      24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2…

j) > mean(myset$mpg) # Determine the mean of the set created above
[1] 23.97222

k) > table(mtcars$cyl) # Create a table of the counts of cars with 4, 6 or 8 cylinders

4  6  8
11  7 14

G) lapply,sapply,tapply
I use the iris data set for these commands
a) > data(iris) #Load iris data set

b) > names(iris)  #Show the column names of the data set
[1] "Sepal.Length" "Sepal.Width"  "Petal.Length" "Petal.Width"  "Species"
c) > lapply(iris,class) #Show the class of all the columns in iris
$Sepal.Length
[1] "numeric"
$Sepal.Width
[1] "numeric"
$Petal.Length
[1] "numeric"
$Petal.Width
[1] "numeric"
$Species
[1] "factor"

d) > sapply(iris,class) # Display a summary of the class of the iris data set
Sepal.Length  Sepal.Width Petal.Length  Petal.Width      Species
"numeric"    "numeric"    "numeric"    "numeric"     "factor"

e) tapply: Instead of computing the mean for each of the species separately, as below, we can use tapply
> a <-iris[iris$Species == "setosa",]
> mean(a$Sepal.Length)
[1] 5.006
> b <-iris[iris$Species == "versicolor",]
> mean(b$Sepal.Length)
[1] 5.936
> c <-iris[iris$Species == "virginica",]
> mean(c$Sepal.Length)
[1] 6.588

> tapply(iris$Sepal.Length,iris$Species,mean)
setosa versicolor  virginica
5.006      5.936      6.588

Hopefully this highly condensed version of R will set you on a R-oll.

You may like
– A peek into literacy in India:Statistical learning with R
– A crime map of India in R: Crimes against women
– Analyzing cricket’s batting legends – Through the mirage with R

Programming Zen and now – Some essential tips-2


This post is a follow-up to my earlier post – How to program – Some essential tips. In this post I expand on some of the ideas of my earlier post.

Programming means different things to different people. To some, programming is a drudgery almost akin to manual labor; to others it is an insurmountable mountain full of frustrations and disappointments; while to yet others it is an intense problem-solving and creative activity. In my opinion programming can mean anything to you. It is your attitude towards coding that makes it a chore, a daunting task or something really creative.

Here are some of my insights on how to go about learning to code

Eyes wide open:  People generally get frustrated when a piece of code they wrote does not do what they intended. In some cases the code snippet will do nothing when they were expecting a final result; sometimes the code will crash or go into an infinite loop and drive the person nuts. (Let me assure you – I have been there, done that!) The usual reaction when this happens is anger and frustration, and we generally tinker around with the code only to get the same result. Soon the emotions progress from anger to hopelessness.

The first thing one needs to do while coding is to keep one's 'eyes wide open'. We tend to be guilty of ignoring the error messages that show up. Here is one way to attack coding

a) Fully understand the 'what' of the problem. If there is an infinite loop or a core dump, check after which point it happens. If there is an execution error, what is the error trying to tell us?
b) Next look into 'why' the error occurred. You could either use a debugger or insert appropriate print statements to take the offending code apart.
c) Thirdly, think 'how' you can address the situation. Make appropriate changes and re-run the code
d) Did it solve the issue? If yes, move forward. Otherwise go back to step a)

Remember that we learn more from our programming mistakes than when our code just 'happens' to work! Mistakes in our code force us to explain every part of the program

Changing times:

Times have changed. Programming Zen and programming now are worlds apart. In many ways, IDEs, Git, Google etc. have made the programmer’s life a lot easier

'Git'ing from here to there: Here is a trick that I learnt fairly recently, though it should have occurred to me more than 2 years back: using Git judiciously for all programming tasks (Note: I am saying nothing new here!). I find it really useful for writing code in incremental changes. I create my initial code on the master branch and then test out incremental changes on a 'new' branch, even for personal projects. Once I have proved a small increment works, I merge it with the master branch. I then start working on the 'new' branch again for the next incremental change, followed by a merge to the master

The steps are

Make initial changes

1. git add .
2. git commit -m "Initial changes"

Create a new branch
3. git checkout -b new

Make incremental changes. Test.
4. git add .
5. git commit -m "Change 1"

Merge with the master
6. git checkout master
7. git merge new

Continue to work with 'new'.
8. git checkout new
9. Go to step 4)

This process can be continued till you get your final product. I find this extremely useful, instead of just using an IDE to make code changes. Invariably you can run into a situation where you had something working some time back and the next instant it is broken, and you can't figure out all the changes you made to the working code. This can be extremely frustrating. With Git you have a history of changes, and you can switch to an earlier version of working code and start from there.

Rarely do I find a reason to have more than 1 branch

Here is a pictorial version of this


 

 

Taking help from Dr. Google: For most questions and errors that you encounter, you will find others who have hit similar bugs. Just google it. You will be more than surprised that others went down exactly the same path that you are treading. Besides, the internet is full of tutorials, blogs and articles on key aspects of programming

Explore the cave of Stack Overflow: Spend time exploring Stack Overflow. Stack Overflow is replete with code snippets and the questions that you wanted to ask. There is so much information out there. If you really don't find an answer to your problem, post it on Stack Overflow and you are bound to get an answer, or a link to a similar question asked previously

Finally, programming requires dollops of patience. Develop patience along with your skill in coding, and soon programming will be much more enjoyable to you.

A crime map of India in R – Crimes against women


In this post I take a look at the gory crime scene across India to determine which states are the heavyweights in crime. Which state is the undisputed champion in rapes in a year? Which state excels in cruelty by husbands and relatives to wives? Which state leads in dowry deaths? To get the answers to these questions I analyze the state-wise data on crimes against women from the Open Government Data (OGD) Platform India. The dataset for this analysis was taken from the 'Crime against Women' data on OGD.

The data in OGD is available for crimes against women in the different states under different 'crime heads' like rape, dowry deaths, kidnapping & abduction etc., for the years 2001 to 2012. This data is plotted as a scatter plot and a linear regression line is then fit to the available data. Based on this linear model, the incidence of crimes like rape, dowry deaths, and kidnapping & abduction is projected for each of the states. This is then used to build a table of the different crime heads for all the states, predicting the number of crimes up to the year 2018. Fortunately, R crunches through the data sets quite easily. The overall projections of crimes against women, based on the linear regression for each state, are shown below.

Projections over the next couple of years
The tables below are based on the projected incidence of crimes under various categories, assuming that these states maintain their torrid crime rates. A cursory look at the tables clearly indicates that Uttar Pradesh is the undisputed heavyweight champion in 4 of the 5 categories shown. Maharashtra and Andhra Pradesh take the 2nd and 3rd ranks in total crimes against women and are significant contenders in the other categories too.

A) Projected rapes in India
The top 3 heavyweights in projected rapes over the next 5 years are 1) Madhya Pradesh 2) Uttar Pradesh 3) Maharashtra

rapes

Full table: Rape.csv
B) Projected Dowry deaths in India 
dowrydeaths

Full table: Dowry Deaths.csv
C) Kidnapping & Abduction
kidnapping

Full table: Kidnapping&Abduction.csv
D) Cruelty by husband & relatives
cruelty

Full table: Cruelty by husbands_relatives.csv
E) Total crimes against women

total

Full table: Total crimes.csv
Here is a beautiful visualization of ‘Total crimes against women’  created as a choropleth map  by Philip Predruco.


The implementation for this analysis was done using the  R language.  The R code, dataset, output and the crime charts can be accessed at GitHub at crime-against-women

Directory structure
– R code
– dataset used
– output
– statewise-crime-charts

The analysis has been completely parametrized. A quick look at the implementation is shown below. A function statecrime() was created, as given below

statecrime.R
This function (statecrime.R) does the following
a) Creates a scatter plot for the state for the given crime head
b) Computes a best-fit linear regression line and draws it
c) Uses the model parameters (coefficients) to compute the projected crime in the years to come
d) Writes the projected values to a text file
e) Creates a directory with the name of the state, if it does not exist, and stores the jpeg of the plot there.

statecrime <- function(indiacrime, row, state, crime) {
year <- c(2001:2012)
# Make separate folders for each state
if(!file.exists(state)) {
dir.create(state)
}
setwd(state)
crimeplot <- paste(crime,".jpg")
jpeg(crimeplot)

# Plot the details of the crime
# (thecrime, atitle, ymin and ymax are derived elsewhere in the full
# function from the chosen row of the crime data)
plot(year, thecrime, pch=15, col="red", xlab="Year", ylab=crime, main=atitle,
xlim=c(2001,2018), ylim=c(ymin,ymax), axes=FALSE)

A linear regression line is fit using ‘lm’

# Fit a linear regression model
lmfit <-lm(thecrime~year)
# Draw the lmfit line
abline(lmfit)

The model parameters are then used to draw the line and also to project the crime incidence for the years 2013 to 2018

nyears <-c(2013:2018)
nthecrime <- rep(0,length(nyears))
# Projected crime incidents from 2013 to 2018 using a linear regression model
for (i in seq_along(nyears)) {
nthecrime[i] <- lmfit$coefficients[2] * nyears[i] + lmfit$coefficients[1]
}
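As an aside, base R's predict() yields the same projection directly from the fitted model; a one-line sketch equivalent to the loop above

# Equivalent projection using predict() on the fitted model
nthecrime <- as.numeric(predict(lmfit, newdata=data.frame(year=nyears)))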

The projected data for each state is appended into an appropriate file which is then used to display the tables at the top of this post

# Write the projected crime rate in a file
nthecrime <- round(nthecrime,2)
nthecrime <- c(state, nthecrime, "\n")
print(nthecrime)
#write(nthecrime,file=fileconn, ncolumns=9, append=TRUE,sep="\t")
filename <- paste(crime,".txt")
# Write the output in the ./output directory
setwd("./output")
cat(nthecrime, file=filename, sep=",",append=TRUE)

The above function is then repeatedly called for each state and for the different crime heads. (Note: It is possible to read both the states and the crime heads with R and perform the computation in a loop. However, I have done this the manual way!)

crimereport.R
# 1. Andhra Pradesh
i <- 1
statecrime(indiacrime, i, "Andhra Pradesh","Rape")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Kidnapping& Abduction")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Dowry Deaths")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Assault on Women")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Insult to modesty")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Cruelty by husband_relatives")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Imporation of girls from foreign country")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Immoral traffic act")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Dowry prohibition act")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Indecent representation of Women Act")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Commission of Sati Act")
i <- i+38
statecrime(indiacrime, i, "Andhra Pradesh","Total crimes against women")
...
...

and so on for all the states

Charts for different crimes against women

1) Uttar Pradesh

The plots for  Uttar Pradesh  are shown below

Rapes in UP

Rape

Dowry deaths in UP

Dowry Deaths

Cruelty by husband/relative

Cruelty by husband_relatives

Total crimes against women in Uttar Pradesh

Total crimes against women

You can find more charts in GitHub by clicking Uttar Pradesh

2) Maharashtra : Some of the charts for Maharashtra

Rape

Rape

Kidnapping & Abduction

Kidnapping& Abduction

Total crimes against women in Maharashtra

Total crimes against women

More crime charts  for Maharashtra

Crime charts can be accessed for the following states from GitHub ( in alphabetical order)

3) Andhra Pradesh
4) Arunachal Pradesh
5) Assam
6) Bihar
7) Chattisgarh
8) Delhi (Added as an exception based on its notoriety)
9) Goa
10) Gujarat
11) Haryana
12) Himachal Pradesh
13) Jammu & Kashmir
14) Jharkhand
15) Karnataka
16) Kerala
17) Madhya Pradesh
18) Manipur
19) Meghalaya
20) Mizoram
21) Nagaland
22) Odisha
23) Punjab
24) Rajasthan
25) Sikkim
26) Tamil Nadu
27) Tripura
28) Uttarakhand
29) West Bengal

The code, dataset and the charts can be cloned/forked from GitHub at crime-against-women

Let me know if you find any interesting patterns in the data.
Thoughts, comments welcome!


See also
A peek into literacy in India: Statistical learning with R

You may also like
– Analyzing cricket’s batting legends – Through the mirage with R
– What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
– Bend it like Bluemix, MongoDB with autoscaling – Part 1

A peek into literacy in India: Statistical Learning with R


In this post I take a peek into the literacy landscape across India as a whole, using the R language. The dataset from the Open Government Data (OGD) Platform India was used for this purpose. This data is based on the 2011 census. The Excel sheets for the states were downloaded for the data for each state. The Union Territories were not included in the analysis.

A thin slice of the data was taken from the data set for each individual state (Note: this could also have been done from the consolidated india.xls sheet, which I came to know of much later).

I calculate the following for each age group

Males (%) attending education institutions = (Males attending educational institutions * 100)/ Total males
Females (%) attending education institutions = (Females attending educational institutions * 100)/ Total Females

This is then plotted as a bar chart against the age distribution. I then overlay the national average on each state's bar chart to check whether literacy in the state is above or below the national average. The implementation in R is included below

The code and data can be forked/cloned from GitHub at india-literacy

The results based on the analysis are given below.

  1. Kerala is clearly the top ranker, with the literacy rates for both males and females well above the average
  2. The states with above average literacy are – Kerala, Himachal Pradesh, Uttarakhand, Tamil Nadu, Haryana, Karnataka, Maharashtra, Punjab
  3. The states with just about average literacy are – Karnataka, Andhra Pradesh, Chattisgarh, Gujarat, Madhya Pradesh, Odisha, West Bengal
  4. The states with below average literacy are – Uttar Pradesh, Bihar, Jharkhand, Arunachal Pradesh, Assam, Jammu and Kashmir, Rajasthan

 

A brief implementation of the basic code in R is shown below

# Read the Arunachal Pradesh literacy related data
arunachal = read.csv("arunachal.csv")
# Create as a matrix
arunachalmat = as.matrix(arunachal)
arunachalTotal = arunachalmat[2:19,7:28]
# Take the transpose as this is necessary for plotting bar charts
arunachalmat = t(arunachalTotal)
# Set the scipen option to format the y axis (otherwise prints as e^05 etc.)
getOption("scipen")
opt <- options("scipen" = 20)
getOption("scipen")
# Create a vector of total Males & Females
arunachalTotalM = arunachalmat[3,]
arunachalTotalF = arunachalmat[4,]
# Create a vector of males & females attending an educational institution
arunachalM = arunachalmat[6,]
arunachalF = arunachalmat[7,]
# Calculate the percent of males attending an educational institution of the total
arunachalpercentM = round(as.numeric(arunachalM) *100/as.numeric(arunachalTotalM),1)
barplot(arunachalpercentM,names.arg=arunachalmat[1,],main ="Percentage males attending educational institutions in Arunachal Pradesh",
xlab = "Age", ylab= "Percentage",ylim = c(0,100), col ="lightblue", legend= c("Males"))
# Overlay the national average ('age' and 'indiapercentM' are computed
# in the same way from the consolidated India data)
points(age,indiapercentM,pch=15)
lines(age,indiapercentM,col="red",pch=20,lty=2,lwd=3)
legend( x="bottomright",
legend=c("National average"),
col=c("red"), bty="n" , lwd=1, lty=c(2),
pch=c(15) )
# Calculate the percent of females attending an educational institution of the total
arunachalpercentF = round(as.numeric(arunachalF) *100/as.numeric(arunachalTotalF),1)
barplot(arunachalpercentF,names.arg=arunachalmat[1,],main ="Percentage females attending educational institutions in Arunachal Pradesh",
xlab = "Age", ylab= "Percentage", ylim = c(0,100), col ="lightblue", legend= c("Females"))
points(age,indiapercentF,pch=15)
lines(age,indiapercentF,col="red",pch=20,lty=2,lwd=3)
legend( x="bottomright",
legend=c("National average"),
col=c("red"), bty="n" , lwd=1, lty=c(2),
pch=c(15) )

A) Overall plot for India

a) India – Males

india-males

b) India – females

india-females

The plots for each individual state is given below

1) Literacy in Tamil Nadu

Tamil Nadu is slightly above the national average. The women seem to do marginally better than the males

a) Tamil Nadu – males

tn-males

b) Tamil Nadu – females

tn-females

2) Literacy in Uttar Pradesh

UP is slightly below the national average. Women fare comparatively worse than men here

a) Uttar Pradesh – males

UP-males

b) Uttar Pradesh – females

UP-females

3) Literacy in Bihar

Bihar is well below the national average for both men and women

a) Bihar – males

bihar-males

b) Bihar – females

bihar-females

4. Literacy in Kerala

Kerala is the winner all the way in literacy with almost 100% literacy across all age groups

a) Kerala – males


kerala-males

b) Kerala -females

kerala-females

 

5. Literacy in Andhra Pradesh

AP just meets the national average for literacy.

a) Andhra Pradesh – males

andhra-males

b) Andhra Pradesh – females

andhra-females

6. Literacy in Arunachal Pradesh

Arunachal Pradesh is below average for most of the age groups

a) Arunachal Pradesh – males

arunachal-males

b) Arunachal Pradesh – females

arunachal-females

7. Literacy in  Assam

Assam is below national average

a) Assam – males

assam-males

b) Assam – females

assam-females

 

8. Literacy in Chattisgarh

Chattisgarh is on par with the national average for both men and women

a) Chattisgarh – males

chattisgarh-males

b) Chattisgarh – females

chattisgarh-females

 

9. Literacy in Gujarat

Gujarat is just about average

a) Gujarat – males

gujarat-males

b) Gujarat – females

gujarat-females

10. Literacy in Haryana

Haryana is slightly above average

a) Haryana – males

haryana-males

b) Haryana – females

haryana-females

11. Literacy in Himachal Pradesh

Himachal Pradesh is cool and above average.

a) Himachal Pradesh – males

himachal-males

 

b) Himachal Pradesh – females

himachal-females

12. Literacy in Jammu and Kashmir

J & K is marginally below average

a) Jammu and Kashmir – males

jk-males

b) Jammu and Kashmir – females

jk-females

 

13. Literacy in Jharkhand

Jharkhand is some ways below average

a) Jharkhand – males

jharkand-males

b) Jharkhand – females

jharkand-feamles

14. Literacy in Karnataka

Karnataka is about average for men. Women seem to do better than men here

a) Karnataka – males

karnataka-males

b) Karnataka – females

karnataka-females

15. Literacy in Madhya Pradesh

Madhya Pradesh meets the national average

a) Madhya Pradesh – males

mp-males

b) Madhya Pradesh – females

mp-females

16. Literacy in Maharashtra

Maharashtra is a front-runner in literacy

a) Maharashtra – males

maharashtra

b) Maharashtra – females

maharashtra-feamles

 

17. Literacy in Odisha

Odisha meets national average

a) Odisha – males

odisha-males

b) Odisha – females

odisha-females

 

18. Literacy in  Punjab

Punjab is marginally above average with women doing even better

a) Punjab – males

punjab-males

b) Punjab – females

punjab-females

19. Literacy in Rajasthan

Rajasthan is average for males and below average for females

a) Rajasthan – males

rajashthan-males

b) Rajasthan – females

rajasthan-females

20. Literacy in Uttarakhand

Uttarakhand rocks and is above average

a) Uttarakhand – males

uttarkhan-males

b) Uttarakhand – females

uttarkhand-females

21. Literacy in West Bengal

West Bengal just about meets the national average.

a) West Bengal – males

wb-males

 

b) West Bengal – females

wb-females

The code can be cloned/forked from GitHub at india-literacy. I have done my analysis on the overall data. The data is further sub-divided across districts in each state, and further into urban and rural. Many different ways of analyzing it are possible. One method is shown here

Conclusion

  1. Kerala is clearly head and shoulders above all states when it comes to literacy
  2. Many states are above average. They are Kerala, Himachal Pradesh, Uttarakhand, Tamil Nadu, Haryana, Karnataka, Maharashtra, Punjab
  3. States with average literacy are – Karnataka, Andhra Pradesh, Chattisgarh, Gujarat, Madhya Pradesh, Odisha, West Bengal
  4. States which fall below the national average are – Uttar Pradesh, Bihar, Jharkhand, Arunachal Pradesh, Assam, Jammu and Kashmir, Rajasthan

See also
– A crime map of India in R: Crimes against women
– What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
– Bend it like Bluemix, MongoDB with autoscaling – Part 1

Statistical learning with R: A look at literacy in Tamil Nadu


In this post I make my first foray into data mining using the R language. As a start, I picked up the data from the Open Government Data (OGD) Platform of India, from the Ministry of Human Resources. There are many data sets under Education. To get started I picked the data set on Tamil Nadu, which deals with the population attending educational institutions by age, sex and institution type. Similar data is available for all states.

I wanted to start off on a small scale, primarily to check out some of the features of the R language. R is clearly the language of choice for processing large amounts of data. R has over 4000 packages that do various things like statistical analysis, regression analysis etc. However, I found this is no easy task. There are a zillion ways in which you can take cross-sections of a large dataset. Some of them will provide useful insights while others will lead you nowhere.

Data science, which is predicted to be the technology of the future given the mountains of data being generated daily, will in my opinion be more of an art and less of a science. There will be wizards who will be able to spot remarkable truths in mundane data, while others will not be that successful.

Anyway, back to my attempt to divine intelligence in the Tamil Nadu (TN) literacy data. The data downloaded was an Excel sheet with 1767 rows and 28 columns. The first 60 rows deal with the overall statistics of literacy in Tamil Nadu state as a whole. Further below are the statistics on the individual districts of Tamil Nadu.
Each of these is further divided into urban and rural parts. The data covers persons from the age of 4 up to the age of 60, and whether they attended school, college, vocational institute etc. To make my initial attempt manageable, I have focused on the data for Tamil Nadu state as a whole, including the breakup of the urban and rural data.
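A minimal sketch of taking such a slice, assuming the downloaded Excel sheet has been saved as a CSV (the file name is hypothetical)

# Hypothetical file name for the sheet exported to CSV
tn <- read.csv("tamilnadu.csv")
# The first 60 rows hold the statistics for Tamil Nadu state as a whole
tnstate <- tn[1:60, ]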

My analysis is included below. The code and the dataset for this implementation are in the R language and can be cloned from GitHub at tamilnadu-literacy-analysis

Analysis of Tamil Nadu (total)
The total population of Tamil Nadu based on an age breakup is shown below

1) Total population Tamil Nadu 
tntotal

2) Males  & Females attending education institutions in TN

tnedu

There are marginally more males attending educational institutions. Also, the number of persons attending educational institutions seems to drop from 11 years of age. There is a spike around 20-24 years, when people attend both school and college. See pie chart 8) below

3) Percentage of males attending educational institution of the total males

percenteduM

4) Percentage females attending educational institutions in TN 

percenteduF

There is a very similar trend between males and females. Attendance peaks between 9-11 years of age, falls to roughly 50% around 15-19 years and then rapidly drops off

5) Boys and girls attending school in TN

tnschool

For some reason there is a marked increase for boys and girls around 20-24. Possibly people repeat classes around this age

6) Persons attending college in TN

tncollege

7) Educational institutions attended by persons between 15- 19 years

tnschool-1

8) Educational institutions attended by persons between 20-24

tnschool-2

As can be seen, there is a large percentage (30%) of people in the 20-24 age group who are in school. This is probably the reason for the spike in "Boys and girls attending school in TN" above, for the 20-24 age range

Education in rural Tamil Nadu

1) Total rural population Tamil Nadu 

ruraltotal

2) Males & Females attending education institutions in rural TN

ruraledu

3) Percentage of rural males attending educational institutions

percentruralM

4) Percentage females attending educational institutions in rural TN of total females

percentruralF

The number of persons attending educational institutions drops rapidly to 40% between 15-19 years of age, for both males and females

5) Boys and girls attending school in rural TN

ruralschool

6) Persons attending college in rural TN

ruralcollege

7) Educational institutions attended by persons between 15- 19 years in rural TN

rural-1

8) Educational institutions attended by persons between 20-24 in rural TN

rural-2

As can be seen, there is a large percentage (39%) of rural people in the 20-24 age group who are in school

Education in urban Tamil Nadu

1) Total population in urban Tamil Nadu 

urbantotal

2) Males & Females attending education institutions in urban TN

urbanedu

3) Percentage of males attending educational institutions of the total males in urban TN

percentruralM

4) Percentage females attending educational institutions in urban TN

percenturbanF

5) Boys and girls attending school in urban TN

urbanschool

6) Persons attending college in urban TN

urbancollege

7) Educational institutions attended by persons between 15- 19 years in urban TN

urban-1

8) Educational institutions attended by persons between 20-24 in urban TN

urban-2

 

As can be seen, there is a large percentage (25%) of urban people in the 20-24 age group who are in school

The R implementation and the Tamil Nadu dataset can be cloned from my repository in GitHub at tamilnadu-literacy-analysis 

The above analysis is just one of a million possible ways the data can be analyzed and visually represented. I hope to hone my skills as I progress along with similar analyses.

Hasta la vista! I’ll be back.

Watch this space!

An Octave primer


Here is a simple Octave primer. Octave is a powerful language for implementing Machine Learning algorithms. As I have mentioned, its strength is its simplicity. I am including some basic commands with which you can get by while implementing fairly complex code

%%Matrix
A matrix can be created as a = [1 2 3; 4 7 8; 12 35 14]; % This is 3 x 3 matrix
Matrix multiplication can be done between m x n * n x k matrix as follows

a = [4 56 3; 2 3 4]; b = [23 1; 3 12; 34 12]; % a = 2 x 3 matrix, b = 3 x 2 matrix
c = a*b; %% c = 2 x 3 * 3 x 2 = 2 x 2 matrix

c =
362 712
191 86

%%Inverse of a matrix can be obtained by
d = pinv(c);
octave-3.2.4.exe:37> d = pinv(c)
d =
-8.2014e-004 6.7900e-003
1.8215e-003 -3.4522e-003

%%Transpose of a matrix
e = c'; % e is the transpose of c

octave-3.2.4.exe:38> e = c'
e =
362 191
712 86

The following operations are done on all elements of a matrix or a vector
a = [1 2; 3 4; 5 6]; k = 5.23;
c = k * a;
d = a - 2
e = a / 5
f = a .* a % Element-wise product
g = a .^2; % Square each element

%% Select slice of matrix
b = a(:,2); % Select column 2 of matrix a (all rows)
c = a(2,:) % Select row 2 of matrix 'a' (all columns)

d = [7 8; 8 9; 10 11; 12 13]; % 4 rows 2 columns
d(2:3,:); %Select from rows 2 to 3 (all columns)

octave-3.2.4.exe:41> d
d =
7 8
8 9
10 11
12 13
octave-3.2.4.exe:43> d(2:3,:)
ans =
8 9
10 11

%% Appending rows to matrix
a = [ 4 5; 5 6; 5 7; 9 8]; % 4 x 2
b = [ 1 3; 2 4]; % 2 x 2
c = [ a; b] % stack a over b
d = [b ; a] % stack b over a

octave-3.2.4.exe:44> a = [ 4 5; 5 6; 5 7; 9 8] % 4 x 2
a =
4 5
5 6
5 7
9 8

octave-3.2.4.exe:45> b = [ 1 3; 2 4] % 2 x 2
b =
1 3
2 4

octave-3.2.4.exe:46> c = [ a; b] % stack a over b
c =
4 5
5 6
5 7
9 8
1 3
2 4

octave-3.2.4.exe:47> d = [b ; a] % stack b over a
d =
1 3
2 4
4 5
5 6
5 7
9 8

%% Appending columns
a = [ 1 2 3; 3 4 5]; b = [ 1 2; 3 4];
c = [a b];
d = [b a];

octave-3.2.4.exe:48> a = [ 1 2 3; 3 4 5]
a =
1 2 3
3 4 5

octave-3.2.4.exe:49> b = [ 1 2; 3 4]
b =
1 2
3 4

octave-3.2.4.exe:50> c = [a b]
c =
1 2 3 1 2
3 4 5 3 4

octave-3.2.4.exe:51> d = [b a]
d =
1 2 1 2 3
3 4 3 4 5
%% Size of a matrix
[c d] = size(a); % c = number of rows, d = number of columns

Creating a matrix of all zeros or ones
d = ones(3,2);
e = zeros(4,3);

%Appending an intercept term to a matrix
a = [1 2 3; 4 5 6]; %2 x 3
b = ones(2,1);
a = [b a];

%% Plotting
Creating 2 vectors
x = [1 3 4 5 6];
y = [5 6 7 8 9];
plot(x,y);

%%Create labels
xlabel("X values); ylabel("Y values);
axis([1 10 4 10]); % Set the range of x and y
title("Test plot);

%%Creating a 3D scatter plot
If we have a 3 column csv file then we can load the data as follows
data = load('values.csv');
X = data(:, 1:2);
y = data(:, 3);
scatter3(X(:,1),X(:,2),y,[],[240 15 15],'x'); % X(:,1) - x axis, X(:,2) - y axis, y - z axis

%% Drawing a 3D mesh
% (xrange, yrange, mu, sigma and theta come from the regression in the
% post linked below)
x = linspace(0,xrange + 20,10);
y = linspace(1,yrange + 20,10);
[XX, YY] = meshgrid(x,y);

[a b] = size(XX)

% Draw the mesh
for i=1:a,
for j= 1:b,
ZZ(i,j) = [1 (XX(i,j)-mu(1))/sigma(1) (YY(i,j) - mu(2))/sigma(2) ] * theta;
end;
end;
mesh(XX,YY,ZZ);

For more details please see the post Informed choices using Machine Learning 2 – Pitting Kumble, Kapil and B S Chandra
kapil-2

%% Creating different polynomial equations
Let X be a feature vector; then
X = [X X.^2 X.^3] % X X^2 X^3

This can be created using a for loop as follows
x = []; % start with an empty vector
for i = 1:n
xtemp = xinput .^ i;
x = [x xtemp];
end;

 

Finally, while doing multivariate regression, if we wanted to create higher order polynomial terms we could do as follows. Let us say we have a feature vector X made of 2 features x1 and x2.

Let us say we wanted to create a polynomial of the form x1^2, x1.x2, x2^2; then we could create X as

X = [X(:,1).^2 X(:,1).*X(:,2) X(:,2).^2]

As you can see, Octave is a really powerful language for Machine Learning, with just a handful of constructs with which one can implement powerful Machine Learning algorithms

How to program – Some essential tips


If one follows the arrow of time from the early 1980s to the present day, programming problems have not only proliferated but have also become more difficult. Fortunately, programming in itself has become more manageable with massive increases in computing horsepower, smarter tools and the instant availability of information on the internet, typically at the click of a mouse.

Learning to program is no easy task, but it can be done with the right mix of attitude, curiosity and interest. Becoming adept at programming, however, is something else. An interesting essay in this context is Peter Norvig's 'Teach Yourself Programming in Ten Years'

Back in the 1980s when I wrote my first Fortran program on my college Mainframe, programming was a lengthy exercise, spanning several days.


My first program was to plot a sine wave of characters on a computer printout. Running this program required the following steps

  1. Enter the program on a teletype terminal and create a stack of Hollerith (punched) cards
  2. Submit the stack of cards to the computer center
  3. The computer center would do a batch execute in the evening on the Mainframe
  4. God forbid your program had a syntax error. If it did, you went back to step 1 the next day
  5. Assuming everything was fine, the computer center would run your program and your output (printout) would be placed in the appropriate pigeon hole, which you would need to pick up the next day

The whole exercise to write a small-sized program could take anywhere between a couple of days to a whole week.

In the early 1990s things got a little better, when one could code, compile, link and execute sitting at one’s desk. However, while programming itself got much simpler, certain tasks were still difficult. Till the late 90s programs of any sort had to be written in a regular text editor (vi, emacs etc.). You would then have to go through the process of compiling, linking and executing.

An angry compiler would typically spew forth venom at missing semi-colons, undeclared variables and uninitialized values. This would go on till you managed to iron out all the syntax errors. Then you would link, run into undefined symbols and have to include the appropriate libraries. And then finally you would execute your code, only to have it crash, at which point the process of debugging would start.

Luckily, technology has made life a whole lot easier, except for the last step, where you can still run into execution errors. These days an IDE (Integrated Development Environment) like Eclipse will flag syntax errors, missing definitions/declarations etc. as you write your code. Moreover, Eclipse can indicate which libraries (imports) you need to include in your package for it to build. The only missing step in today’s IDEs is the ability to predict possible execution errors in your program. I wouldn’t be surprised if, in future, like Microsoft Word, the IDE were able to tell you that a programming construct does not make sense.

So things have become a lot easier for the programmer. The following tips are particularly useful as you progress in programming.

  1. These days, when you are learning a new programming language, it is not necessary to know the language from cover to cover by reading a book. Back when we learnt C it was necessary to know everything from bit structures to macros and pragmas, because for every syntax or execution error one had to rush to the textbook and thumb through it for the answer. Not so in these days of Google: you have the world’s library at your fingertips.
  2. To get started it is enough to learn just the most important programming constructs of the language (say structures and classes, or car and cdr in Lisp) besides the usual suspects like loops, conditions and case constructs
  3. Download and install an IDE for the language. In most cases Eclipse will work
  4. Write a simple program and test out your code
  5. To do any sort of programming these days you will necessarily need to make 3 friends
    1. Google
    2. Stackoverflow
    3. Git & GitHub
  6. Honing your Googling skills is very important. There are answers to almost any sort of programming problem out there. You would be surprised how many others have made exactly the same mistake that you did. Googling will also take you to interesting tutorials, blogs and articles that discuss different aspects of the programming language and the problem you are trying to solve
  7. Stackoverflow is really a godsend to all programmers. There are questions on almost every aspect of every programming language on earth. If you spend time searching Stackoverflow you are bound to find answers and code snippets that you can readily use in your own code
  8. Post your questions on Stackoverflow when you don’t find the answers there. You are bound to get quick answers, thanks to the gamification of Stackoverflow (points, upvotes, badges etc.)
  9. Git & GitHub: I would suggest that you download and install GitHub for Windows. This will give you version control on your desktop, letting you modify code while being able to switch back to an earlier version with Git. Read a good tutorial on Git for Windows
  10. Once you have working code, push it to GitHub and share it with other programmers

Now that you have the basic setup here are few other extremely important tips

  1. The most important criterion for programming is ‘attitude’. Initially you are bound to get frustrated, angry and irritated, but it is necessary to look at the errors you get with the right attitude. Know that an error is telling you something. Usually the answer to your mistake is in the error message itself; look at it closely and try to understand it. You will learn a lot more from your errors than from copy-pasting somebody else’s code, even if it works right the first time around!
  2. Make sure you do something different each time. As Einstein said, “If you keep doing the same thing, you will keep getting the same result”
  3. There are different ways to debug your code. You could use the debugger to single-step through the code and keep checking the values of variables. I personally prefer print statements to localize where things are going wrong; I then try to narrow the problem down to a few lines of code and take it apart.

Hopefully the above tips are useful. Programming can be a creative activity and will be indispensable in our future.

Above all, have fun coding; there are so many possibilities these days!

Also see Programming Zen and now – Some essential tips-2

Applying the principles of Machine Learning


While working with multivariate regression there are certain essential principles that must be applied to ensure the correctness of the solution, while also being able to pick the optimum solution. This is all the more important when the problem has a large number of features. In this post I apply these important principles to a regression data set which I was able to pull off the internet. This data set was taken from the UCI Machine Learning repository and deals with Boston housing data. The housing data gives the cost of a house in the Boston suburbs based on the number of rooms, the connectivity to main highways, the crime rate in the area and several other attributes. There are a total of 506 data points in this data set, with 13 features.

This seemed a reasonable dataset on which to try out the principles of Machine Learning I had picked up from Coursera’s ML course.

Out of the total of 13 features, 2 features, ‘ZN’ and ‘CHAS’ (proximity to the Charles river), were dropped as the values in these columns were mostly zero. The remaining 11 features were used to predict the output variable, the price.
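A minimal sketch of loading and trimming the data in Octave (the filename is an assumption; in the UCI column ordering ZN and CHAS are the 2nd and 4th columns, and MEDV, the price, is the 14th):

data = load('housing.data'); % filename assumed
X = data(:, [1 3 5:13]); % keep the 11 features, dropping ZN (col 2) and CHAS (col 4)
y = data(:, 14); % MEDV - the median house price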

The following key rules were applied to the data set

  • The dataset was divided into training samples (60%), a cross-validation set (20%) and a test set (20%) using a random index (a sketch of this split follows this list)
  • Different polynomial functions were tried while performing gradient descent to determine the theta values
  • Different combinations of ‘alpha’, the learning rate, and ‘lambda’, the regularization parameter, were tried while performing gradient descent
  • The error rate was then calculated on the cross-validation and test sets
  • The theta values obtained for the lowest cost for each polynomial were used to compute and plot the learning curve for the different polynomials against an increasing number of training and cross-validation samples, to check for bias and variance
  • The cost was plotted against the polynomial degree to obtain the best-fit polynomial for the data set
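A minimal sketch of the random 60/20/20 split mentioned in the first bullet (variable names assumed, not necessarily the repo’s exact code):

% Randomly permute the row indices and carve out 60/20/20
m = size(data, 1);
randidx = randperm(m);
ntrain = floor(0.6 * m);
ncv = floor(0.2 * m);
training = data(randidx(1:ntrain), :);
cross_validation = data(randidx(ntrain+1 : ntrain+ncv), :);
test_data = data(randidx(ntrain+ncv+1 : end), :);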

A multivariate regression hypothesis can be represented as

hθ(x) = θ0 + θ1x1 + θ2x2 + θ3x3 + θ4x4 + …
And the cost is determined as
J(θ0, θ1, θ2, θ3, …) = 1/(2m) ∑ (hθ(xi) – yi)^2
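In vectorized Octave form this cost can be computed as below (a generic sketch, named computeCostSketch here so that the course functions omitted under the Honor Code stay omitted):

function J = computeCostSketch(X, y, theta)
% X is the m x (n+1) design matrix with a leading column of ones
m = length(y);
% Vectorized sum of squared errors divided by 2m
J = sum((X * theta - y) .^ 2) / (2 * m);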
The implementation was done using Octave. As in my previous posts some functions have not been included to comply with Coursera’s Honor Code. The code can be cloned from GitHub at machine-learning-principles

a) housing compute.m – In this module I perform gradient descent for different polynomial degrees and check the error obtained when using the computed theta on the cross-validation and test sets

max_degrees = 4;
J_history = zeros(max_degrees, 1);
Jcv_history = zeros(max_degrees, 1);
for degree = 1:max_degrees
[J Jcv alpha lambda] = train_samples(randidx, training, cross_validation, test_data, degree);
% Record the training and cross-validation costs for this degree
J_history(degree) = J;
Jcv_history(degree) = Jcv;
end;
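With the costs recorded per degree, the polynomial degree with the lowest cross-validation cost can then be read off, for instance:

[minJcv, best_degree] = min(Jcv_history) % index of the best-performing degree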

b) train_samples.m – This module uses gradient descent to check the best fit for a given polynomial degree, for different combinations of alpha (learning rate) and lambda (regularization).

for i = 1:length(alpha_arr),
for j = 1:length(lambda_arr)
alpha = alpha_arr{i};
lambda = lambda_arr{j};
% Perform Gradient descent
% Compute error for training sample for computed theta values
% Compute the error rate for the cross validation samples
% Compute the error rate against the test set
end;
end;
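For reference, a single regularized gradient-descent update has this general shape (a generic sketch of what the omitted steps do, not the repo’s code):

% One gradient-descent iteration with L2 regularization;
% theta(1), the intercept term, is conventionally not regularized
m = length(y);
grad = (X' * (X * theta - y)) / m;
reg = (lambda / m) * [0; theta(2:end)];
theta = theta - alpha * (grad + reg);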

c) cross_validation.m – This module uses the theta values to compute the cost for the cross-validation set

d) test-samples.m – This module computes the error when using the trained theta on the test set

e) poly.m – This module constructs polynomial vectors based on the degree as follows
function [x] = poly(xinput, n)
x = [];
for i = 1:n
xtemp = xinput .^ i; % i-th power of the input
x = [x xtemp]; % append as new columns
end;
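The function also works column-wise on matrices; for example poly([1 2; 3 4], 2) appends the squared columns as [X X.^2]:

Xp = poly([1 2; 3 4], 2)
Xp =
1 2 1 4
3 4 9 16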

f) learning_curve.m – The learning curve module plots the error rate for an increasing number of training and cross-validation samples. This is done as follows: for the theta with the lowest cost as determined by gradient descent, for i from 1 to 100

  • Compute the error for ‘i’ training samples
  • Compute the error for ‘i’ cross-validation samples
  • Plot the learning curve to determine the bias and variance of the polynomial fit

This is included below
for i = 1:100
% Take the first 'i' training samples
xsample = xtrain(1:i,:);
ysample = ytrain(1:i,:);
[xsample] = poly(xsample,degree);
xsample = [ones(i, 1) xsample]; % add the intercept term
[c d] = size(xsample);
theta = zeros(d, 1);
% Minimize using fmincg (the minimization step is omitted per the Honor Code)
J = computeCost(xsample, ysample, theta);
Jtrain(i) = J;
% Compute the cost on the first 'i' cross-validation samples with the same theta
xsample_cv = xcv(1:i,:);
ysample_cv = ycv(1:i,:);
[xsample_cv] = poly(xsample_cv,degree);
xsample_cv = [ones(i, 1) xsample_cv];
J_cv = computeCost(xsample_cv, ysample_cv, theta);
Jcv(i) = J_cv;
end;
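The two cost vectors can then be plotted against the number of samples; a minimal sketch:

% Plot training and cross-validation cost vs number of samples
plot(1:100, Jtrain, 'b', 1:100, Jcv, 'r');
xlabel('Number of training samples');
ylabel('Cost J');
legend('Training cost', 'Cross-validation cost');
title('Learning curve');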

Finally, a plot is made of the cost against different values of lambda.

The results are included below

A) Polynomial degree 1
Convergence graph
convergence-1

Learning curve
learning-curve-1

The above learning curve shows a strong bias. Note: the learning curve was generated with around 100 samples.
B) Polynomial degree 2

Convergence graph
convergence-2

Learning curve
learning-curve-2

The learning curve for degree 2 shows a higher variance.

C) Polynomial degree 3
Convergence graph

convergence-3

Learning curve
learning-curve-3

D) Polynomial degree 4
Convergence graph
convergence-4

Learning curve
learning-curve-4

E) Cost vs polynomial degree
This plot is useful to determine which polynomial degree will give the best fit, with the lowest cost, for the dataset.

degree-cost-1

Clearly from the above it can be seen that degree 2 will give a good fit for the data set.

F) Lambda vs Cost function
lambda-cost-1

The above code demonstrates some key principles to keep in mind while performing multivariate regression. The code can be cloned from GitHub at machine-learning-principles.