Into the Telecom vortex


“Ten little Indian boys went out to dine,
One choked his little self and then there were nine
Nine little Indian boys sat up very late;
One overslept himself and then there were eight…”

From the poem “Ten Little Indians”


You don’t need to be particularly observant to notice that the telecom landscape of the last decade and a half is full of dead organizations, bloodshed and gore. Organizations have been slain by ruthless times, and bigger ones have devoured the weaker, fallen ones. Telecom titans have vanished, and giants have been reduced to dwarfs.

Some telecom companies have merged in a deadly embrace, trying to beat the market forces, only to capitulate to the market’s inexorable death march.

The period from the early 1980s to the late 1990s was a glorious one for telecommunication. Digital switches (1972-1982), ISDN (1988), international calling, trunk protocols, mobile telephony (~1991), 2G, 2.5G and 3G moved in succession, one after another.

Advancement came after advancement. The future had never looked so bright for telecom companies.

The late 1990s were heady years, not just for telecom companies, but for all technology companies. Stock prices soared and many stocks were over-valued. This was mainly due to what was famously described as the ‘irrational exuberance’ of the stock market.

Lucent, Alcatel, Ericsson, Nortel Networks, Nokia, Siemens and Telcordia all ruled supreme.

1997-2000: then the inevitable happened. The infamous dot-com bust of 2000 reduced many technology stocks to penny stocks. Telecom company stocks went into a major tailspin, and stock prices of telecom organizations plummeted. This situation, many felt, was further exacerbated by the fact that nothing important or earth-shattering was forthcoming from telecom. In other words, there was no ‘killer app’ from the telecommunication domain.

From 2000 onwards, 3G, HSDPA, LTE etc. have all come and gone. But the markets were largely unimpressed. This was also the period of the downward slide for telecom. The last decade and a half has been extraordinarily violent: technology units of dying organizations have been cannibalized by the more successful ones.

Stellar organizations collapsed, others were transformed into ‘white dwarfs’, and still others shattered with the ferocity of a supernova.

Here is a short recap of the major events.

  • 2006 – After a couple of unsuccessful attempts Alcatel and Lucent finally decide to merge
  • 2006 – Nokia marries Siemens in a 20 billion Euro deal
  • 2009-10 – Ericsson purchases Nortel’s CDMA and LTE business for $1.13 billion
  • 2009-10 – Nortel implodes
  • 2010 – Motorola sells networking unit to Nokia for $1.2 Billion
  • 2011 – Internet giant Google mops up Motorola’s handset division for $12.5 billion, largely for the patents
  • 2012 – Ericsson closes a deal with Telcordia for $1.15 billion
  • 2013 – Nokia sells its handset division to Microsoft after facing a serious beating from smartphones
  • 2015 – Nokia agrees to a $16.6 billion takeover of Alcatel Lucent

And so the story continues, like the rhyme in Agatha Christie’s mystery novel ‘And Then There Were None’:

“Ten little Indian boys went out to dine,
One choked his little self and then there were nine…”

The telecom companies continue their search for the elusive ‘killer app’ as progress comes in small increments – 3G, 3.5G, 3.75G, 4G, 5G etc.

Personally, I think the future of telecom companies lies in their ability to embrace the latest technologies of Cloud Computing, Big Data, Software Defined Networks and Software Defined Datacenters, and to re-invent themselves. Rather than looking for some elusive ‘killer app’, they have to re-enter the technology scene with a Big Bang.

As I mentioned in one of my earlier posts, “Architecting a cloud Based IP Multimedia System”, the proverbial pot at the end of the rainbow may lie in

  1. Virtualizing the IP Multimedia Subsystem (IMS) switches, namely the CSCFs (P-CSCF, S-CSCF, I-CSCF etc.),
  2. Using the features of the cloud like Software Defined Storage (SDS), load balancers and auto-scaling to elastically scale the CSCF instances up or down to handle varying ‘call traffic’,
  3. Having equipment manufacturers (Nokia, Ericsson and Huawei) adopt innovative pricing models with the carriers, like AT&T, MCI, Airtel or Vodafone. Instead of a one-time cost for hardware and software, the equipment manufacturers would charge based on usage or call traffic (utility charging). This would be a win-win for both the equipment manufacturer and the carrier,
  4. Using SDN to provide the necessary virtualized pipes between users with the necessary policies for advanced services like video-chat, white-boarding, real-time gaming etc.
  5. Using Big Data and Hadoop to analyze Call Detail Records (CDRs) and provide advanced services to customers, like differential rates for calls etc. (a toy sketch of this idea follows below)
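By way of illustration of point 5, here is a minimal Python sketch (no Hadoop involved; the CDR fields and numbers are entirely made up) of the kind of map-reduce style computation a carrier could run over CDRs, totaling call minutes per subscriber as one input to usage-based or differential pricing:

from collections import defaultdict

# Hypothetical CDRs: (calling_number, called_number, duration_minutes)
cdrs = [
    ("9880012345", "9880054321", 12),
    ("9880012345", "9880099999", 3),
    ("9880054321", "9880012345", 7),
]

def mapper(cdr):
    caller, _, minutes = cdr
    return (caller, minutes)           # emit a (key, value) pair

def reducer(pairs):
    totals = defaultdict(int)
    for key, value in pairs:           # sum the values per key
        totals[key] += value
    return dict(totals)

print(reducer(map(mapper, cdrs)))      # {'9880012345': 15, '9880054321': 7}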

Clearly there will be challenges in this virtualized view of things. Telecom equipment is renowned for its five 9’s (99.999%) availability, which allows barely over 5 minutes of downtime a year. The challenge will be achieving this resiliency, high availability and fault tolerance with cloud servers. How can WAN latencies be mitigated? How can SDN provide the QoS required for voice, video and data traffic in IMS?

IMS has many interesting services, where video calls from laptops can be transferred as data calls to mobile phones and vice versa, or from mobile networks to WiFi, and so on.

Many hurdles will have to be crossed. But this, in my opinion, will be the path forward.

While the last decade and a half has been bad for the telecom industry, I personally feel we are on the verge of the next big breakthrough in telecom. Telecom will rise like a phoenix from its ashes in the next couple of years.

Also see
1. A crime map of India in R: Crimes against women
2.  What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
3.  Bend it like Bluemix, MongoDB with autoscaling – Part 2
4. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
5. Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data
6. Deblurring with OpenCV:Weiner filter reloaded

TWS-4: Gossip protocol: Epidemics and rumors to the rescue


Having successfully completed a grueling yet enjoyable ‘Cloud Computing Concepts’ course at Coursera, from the University of Illinois at Urbana-Champaign, taught by Prof Indranil Gupta, I continue my “Thinking Web Scale (TWS)” series of posts. In this post, I would like to dwell on the Gossip protocol.

The Gossip protocol finds its way into distributed systems from epidemiology, the branch of science which studies and models how diseases and rumors spread through society. The Gossip protocol disseminates information the way diseases and rumors spread in society, or the way a computer virus is able to infect large networks very rapidly.

The Gossip protocol is particularly relevant in large distributed systems with hundreds and hundreds of servers spread across multiple data centers, e.g. social networks like Facebook, Google or Twitter. The servers that power Google’s search, or the Facebook or Twitter engines, are made of hundreds of commercial off-the-shelf (COTS) computers. This is another way of saying that the designers of these systems should fold extremely high failure rates of the servers into their design. In other words, “failures will be the norm and not the exception”.

As mentioned in my earlier post, in these large distributed systems servers will fail, and new servers will be continuously joining the system. The distributed system must be able to accommodate servers joining or leaving the system. There is no global clock, and each server has its own clock. To handle server failures, data is replicated over many servers, which obviously leads to issues of maintaining data consistency between the replicas.

A well-designed distributed system must include in its design the following key properties:

  1. Availability – Data should be available when you want it
  2. Consistency – Data should be consistent across multiple copies
  3. Should be fault tolerant
  4. Should be scalable
  5. Handle servers joining or leaving the systems transparently

One interesting aspect of distributed systems, much like operating systems, is the fact that a lot of the design choices are based on engineering judgments. The design choices are usually a trade-off between slightly different performance characteristics. Some of them are obvious and some not so obvious.

Why Gossip protocol? What makes it attractive?

Here are some approaches

  1. Centralized Server:

Let us assume that in a network of servers, one server (Server A) has some piece of information which it needs to spread to the other servers. One way is to have this server send the message to every other server individually. While this would work, there are 2 obvious deficiencies with this approach

  1. The Server A will hog the bandwidth in transmitting the information to all other servers
  2. Server A will be a hot spot besides also being a Single Point of Failure

Cons: In other words, if we have a central server always disseminating information, then we run into the issue of a ‘Single Point of Failure’ of this central server.

  2. Directed Graph

Assuming that we construct a directed overlay graph over the network of servers, we could transmit the message from Server A to all other servers. This approach has the advantage of less traffic, as each server node will typically have around 1-3 children, resulting in lower bandwidth utilization. However, the disadvantage of this approach is that when an intermediate non-leaf node fails, the information will not reach any of the children of the failed node.

 Cons: Does not handle failures of non-leaf nodes well

  3. Ring Architecture

In this architecture, Server A passes the message around the ring till it gets to the desired server. Each node has one predecessor and one successor. Like the previous example, this has the drawback that if one or more servers in the ring fail, the message does not get to its destination.

Cons: Does not handle failures of nodes in the ring well

Note: We should note that these engineering choices only make sense in certain circumstances. So, for example, the directed graph and the ring structure discussed above have deficiencies for the distributed-system case, yet they are accepted design patterns in computer networking, e.g. Token Ring (IEEE 802.5) and the graph of nodes in a network. Hierarchical trees are the norm in telecom networks, where international calls reach the main trunk exchange, then the central office and finally the local office, in a route that goes root – non-leaf – leaf.

  4. Gossip protocol

Enter the Gossip protocol (here is a good summary on the gossip protocol). In the Gossip protocol each server sends the message to ‘b’ random peers. The value ‘b’, typically a small number, is called the fan-out. Server A, which has the data, is assumed to be ‘infected’. In the beginning only Server A is infected, while all other servers are ‘susceptible’. Each server receiving the message is now considered to be infected, and each infected server transmits to ‘b’ other servers. It is possible that a receiving server is already infected, in which case it will drop the message.

In many ways this is similar to the way a disease spreads through a virus: the disease spreads when an infected person comes in contact with another person.

The nice part about the Gossip protocol is that it is lightweight and can infect the entire set of servers in O(log N) rounds.

This is fairly intuitive: in each round, every infected server contacts ‘b’ other servers, where ‘b’ is the fan-out, so the set of infected servers grows geometrically until it saturates.
The computation is as follows

Let x0 = n (the initial number of un-infected servers) and y0 = 1 (1 infected server, A) at time t = 0,
with x + y = n + 1 at all times

Let β be the contact rate between the ‘susceptible’ and ‘infected’ servers (there are x*y such pairs); the rate of infection can then be represented as
dx/dt = -βxy

The negative sign indicates that the number of ‘non-infected’ servers will decrease over time
(It is amazing how we can capture the entire essence of the spread of disease through a simple, compact equation)

The solution of the above equation (which I have taken on good faith, as my knowledge of differential equations is a faint memory; I hope to refresh it when I get the chance!) is

x = n(n+1)/(n + e^(β(n+1)t))      – (1)
y = (n+1)/(1 + n*e^(-β(n+1)t))    – (2)

Solution (1) clearly shows that the number ‘x’ of un-infected servers at time ‘t’ drops rapidly to 0 as the denominator becomes large. The number of infected servers ‘y’, as t increases, tends to n + 1; in other words, all servers get infected.
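For those who want the missing step, here is a sketch of how solutions (1) and (2) follow. Since x + y = n + 1, substituting x = n + 1 - y turns the equation into the standard logistic equation

dy/dt = βxy = β(n+1-y)y

Separating variables and using partial fractions gives

∫ dy/(y(n+1-y)) = ∫ β dt   =>   (1/(n+1)) ln(y/(n+1-y)) = βt + C

and the initial condition y(0) = 1 fixes C = -ln(n)/(n+1). Solving for y, and then x = n + 1 - y, yields equations (2) and (1) above.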

This method, where an infected server sends the message to ‘b’ servers, is known as the ‘push’ approach.

Pros: The Gossip protocol is clearly more resilient to servers failing, as the gossip message is sent to ‘b’ random targets and can route around failures.
Cons: There is a possibility that the ‘b’ random targets selected for infection are already infected, and the infection can die out early if the few infected servers fail.

The solution for the above is a ‘pull’ approach, where after a time ‘t’ the un-infected servers pull the data from random servers. This way an un-infected server will also get infected if it pulls the data from an already infected server.

A third approach is to use a combination of push and pull.
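To make the O(log N) spread concrete, here is a minimal simulation of the ‘push’ variant (my own sketch: rounds are synchronous and peers are chosen uniformly at random, both simplifications of a real deployment):

import math
import random

def push_gossip_rounds(n, b=2, seed=42):
    """Simulate synchronous push gossip: in each round, every infected
    server pushes the message to b random peers. Returns the number of
    rounds until all n servers are infected."""
    random.seed(seed)
    infected = {0}                      # server 0 starts out 'infected'
    rounds = 0
    while len(infected) < n:
        newly = set()
        for _ in infected:
            # each infected server picks b random targets; targets that
            # are already infected simply drop the duplicate message
            newly.update(random.randrange(n) for _ in range(b))
        infected |= newly
        rounds += 1
    return rounds

for n in (100, 1000, 10000):
    print(n, push_gossip_rounds(n), "~log2(n) =", math.ceil(math.log2(n)))

The round count grows roughly logarithmically in the number of servers, which is what makes gossip so attractive at scale.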
Gossip has been used extensively in the Cassandra NoSQL database (originally developed at Facebook, now an Apache project). Amazon’s DynamoDB and the Riak NoSQL DB also use forms of the Gossip protocol.

Failure detection: The Gossip protocol has been used extensively in detecting failures. The failed servers are removed from the membership list, and this list is gossiped so that all servers have a uniform view of the set of live servers. However, as with any approach, this is prone to false positives, where a server is marked as ‘failed’ even though it is alive, perhaps because of a temporary network failure. Moreover, the network load of epidemic-style membership lists is also high.

One method to handle false positives is to initially place a failed server under ‘suspicion’. When the number of messages attributing failure to this server rises above a threshold ‘t’, the server is assumed to have failed and is removed from the membership list.
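A minimal sketch of such a suspicion mechanism (purely illustrative; the threshold value and the membership handling here are hypothetical):

from collections import defaultdict

SUSPICION_THRESHOLD = 3      # hypothetical threshold 't'

membership = {"s1", "s2", "s3"}
suspicion_counts = defaultdict(int)

def on_failure_gossip(server):
    # Count independent failure reports; evict the server from the
    # membership list only when enough reports accumulate
    suspicion_counts[server] += 1
    if suspicion_counts[server] >= SUSPICION_THRESHOLD:
        membership.discard(server)     # now considered truly failed

for report in ("s2", "s2", "s2"):
    on_failure_gossip(report)
print(membership)                      # {'s1', 's3'}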

Cassandra uses an ‘accrual’ failure-detection mechanism to detect failures in the distributed NoSQL database.

Epidemic protocols, like the Gossip protocol, are particularly useful in large-scale distributed systems where servers leave and join the system.

One interesting application of the epidemic protocol is simply to compute the overall state of the system. Consider an information exchange where every node i has set an internal value xi = 0, except node 1 which has x1 = 1 (‘infected’) (from the book Distributed Systems: Principles & Paradigms by Andrew Tanenbaum and Maarten Van Steen); that is, xi = 1 if i = 1, and xi = 0 if i > 1.

If the nodes gossip these values, each exchanging pair replacing both values with the average (xi + xj)/2, then after a period of time every value will tend towards 1/N, where N is the total number of nodes in the system. Since 1/xi then approximates N, all the servers in the system become aware of the total size of the system.
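Here is a quick toy simulation of this averaging gossip (my own sketch; a real system would perform these pairwise exchanges asynchronously):

import random

def estimate_size(n, exchanges_per_node=200, seed=7):
    # Node 0 starts with x = 1, all others with x = 0. Repeated pairwise
    # averaging preserves the sum (= 1), so every x_i converges to 1/N
    # and each node can estimate N as 1/x_i.
    random.seed(seed)
    x = [0.0] * n
    x[0] = 1.0
    for _ in range(exchanges_per_node * n):
        i, j = random.sample(range(n), 2)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return 1.0 / x[0]                  # node 0's estimate of N

print(estimate_size(50))               # prints a value close to 50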

Conclusion: The Gossip protocol has widespread application in the distributed systems of today, from spreading information and membership to failure detection, monitoring and alarming. It is really interesting to note how the theory of epidemics, born of the study of disease spread, has become so important in computer science.

Also see
1. A crime map of India in R: Crimes against women
2.  What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
3.  Bend it like Bluemix, MongoDB with autoscaling – Part 2
4. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
5. Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data

Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data


In the last decade and a half, a class of problems has arisen that is becoming very critical in the computing domain. These problems deal with computing in highly distributed environments. A key characteristic of this domain is the need to grow elastically with increasing workloads while tolerating failures without missing a beat. In short, I would like to refer to this as ‘Web Scale Computing’, where the number of servers exceeds several hundred and the data size is of the order of a few hundred terabytes to several exabytes.

There are several features that are unique to large-scale distributed systems:

  1. The servers used are not specialized machines but regular commodity, off-the-shelf servers
  2. Failures are not the exception but the norm. The design must be resilient to failures
  3. There is no global clock. Each individual server has its own internal clock with its own skew and drift rates. Algorithms exist that can create a notion of a global clock
  4. Operations happen at these machines concurrently. The order of the operations, things like causality and concurrency, can be evaluated through special algorithms like Lamport or Vector clocks
  5. The distributed system must be able to handle failures where servers crash, disk fails or there is a network problem. For this reason data is replicated across servers, so that if one server fails the data can still be obtained from copies residing on other servers.
  6. Since data is replicated there are associated issues of consistency. Algorithms exist that ensure that the replicated data is either ‘strongly’ consistent or ‘eventually’ consistent. Trade-offs are often considered when choosing one of the consistency mechanisms
  7. Leaders are elected democratically. Then there are dictators who get elected through ‘bully’ing (the Bully algorithm).

In some ways distributed systems behave like a murmuration of starlings (or a school of fish),  where a leader is elected on the fly (pun unintended) and the starlings or fishes change direction based on a few (typically 6) closest neighbors.

This series of posts, Thinking Web Scale (TWS) ,  will be about Web Scale problems and the algorithms designed to address this.  I would like to keep these posts more essay-like and less pedantic.

In the early days, computing used to be done on a single monolithic machine with its own CPU, RAM and disk. This situation was fine for a long time, as technology promptly kept its date with Moore’s Law, popularly paraphrased as ‘computing power and memory capacity will double every 18 months’. However, this situation changed drastically as the data generated by machines grew exponentially – whether call detail records, records from retail stores, click streams, tweets, or the status updates of today’s social networks.

These massive amounts of data cannot be handled by a single machine. We need to ‘divide and conquer’ the data for processing. Hence there is a need for hundreds of servers, each handling a slice of the data.

The first post is about the fairly recent computing paradigm ‘Map-Reduce’. Map-Reduce is a product of Google Research and was developed to solve Google’s need to create an inverted index of Web pages, to compute PageRank, etc. The algorithm was initially described in a white paper published by Google. The PageRank algorithm now powers Google’s search, which is almost indispensable in our daily lives.

Map-Reduce assumes that these servers are not perfect, failure-proof machines. Rather, Map-Reduce folds into its design the assumption that the servers are regular, commodity servers performing a part of the task. The hundreds of terabytes of data are split into 16 MB to 64 MB chunks and distributed into a file system known as a ‘Distributed File System (DFS)’, of which there are several implementations. Each chunk is replicated across servers. One of the servers is designated the ‘Master’. This ‘Master’ allocates tasks to ‘worker’ nodes. The Master node also keeps track of the location of the chunks and their replicas.

When the Map or Reduce has to process data, the process is started on the server in which the chunk of data resides.

The data is not transferred to the application from another server. The compute is brought to the data, and not the other way around. In other words, the process is started on the server where the data and intermediate results reside.

The reason for this is that it is more expensive to transmit data. Besides the latencies associated with data transfer can become significant with increasing distances

Map-Reduce has its genesis in a Lisp construct of the same name, where one could apply a common operation over a list of elements and then reduce the resulting list with a reduce operation.
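In Python the same Lisp idea reads as follows (a trivial illustration):

from functools import reduce

# 'map' applies a common operation over a list of elements...
squares = map(lambda x: x * x, [1, 2, 3, 4])
# ...and 'reduce' folds the resulting list into a single value
total = reduce(lambda a, b: a + b, squares)
print(total)    # 30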

Map-Reduce was originally created by Google to solve the PageRank problem. Now Map-Reduce is used across a wide variety of problems.

The main components of Map-Reduce are the following

  1. Mapper: converts every input d ∈ D to a pair (key(d), value(d))
  2. Shuffle: moves all pairs (k, v) and (k′, v′) with k = k′ to the same machine
  3. Reducer: transforms the set {(k, v1), (k, v2), …} into an output pair (k, f(v1, v2, …))
  4. Combiner: if one machine has multiple pairs (k, v1), (k, v2) with the same k, it can perform part of the Reduce before the Shuffle

A schematic of Map-Reduce is included below.

[Figure: Map-Reduce schematic]

Map-Reduce is usually a perfect fit for problems that have an inherent property of parallelism; to this class of problems the Map-Reduce paradigm can be applied simultaneously over large sets of data. The ‘Hello World’ equivalent of Map-Reduce is the word count problem, where we simultaneously count the occurrences of words in millions of documents.

The map operation scans the documents in parallel and outputs key-value pairs, where the key is the word and the value is a count of 1 for each occurrence. In other words, ‘map’ will scan each word and emit the pair (word, 1).

So, if the document contained

“All men are equal. Some men are more equal than others”

Map would output

(all,1), (men,1), (are,1), (equal,1), (some,1), (men,1), (are,1), (more,1), (equal,1), (than,1), (others,1)

The Reduce phase will take the above output and sum the values of all key-value pairs with the same key

(all,1), (men,2), (are,2), (equal,2), (some,1), (more,1), (than,1), (others,1)

So we get to count all the words in the document
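Here is a minimal, framework-free sketch of the word count job (a toy single-machine illustration, not actual Hadoop or Google code) that mirrors the Mapper, Shuffle and Reducer stages listed earlier:

from collections import defaultdict

documents = ["All men are equal. Some men are more equal than others"]

# Mapper: emit (word, 1) for every word in every document
mapped = [(word.strip('.').lower(), 1)
          for doc in documents for word in doc.split()]

# Shuffle: group the values by key (word)
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reducer: sum the values for each key
counts = {word: sum(values) for word, values in groups.items()}
print(counts)   # {'all': 1, 'men': 2, 'are': 2, 'equal': 2, ...}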

In the Map-Reduce the Master node assigns tasks to Worker nodes which process the data on the individual chunks

[Figure: Master node assigning tasks to worker nodes]

Map-Reduce also makes short work of large matrices and can crunch matrix operations like matrix addition, subtraction, multiplication etc.

Matrix-Vector multiplication

As an example, consider a matrix-vector multiplication (taken from the book Mining Massive Datasets by Jure Leskovec, Anand Rajaraman et al.).

For an n x n matrix M, with value m_ij in the i-th row and j-th column, multiplied by a vector v with components v_j, the matrix-vector product is the vector x whose components are

x_i = Σ_j m_ij * v_j,   j = 1, …, n

Here the products m_ij * v_j can be computed by the map function and the summation can be performed by a reduce operation. The obvious question is: what if the vector v or the matrix M does not fit into memory? In such a situation the vector and matrix are divided into equal-sized slices and the operation is performed across machines. The application would then have to consolidate the partial results.
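A toy sketch of this in the map-reduce style (single machine, a tiny made-up matrix, purely illustrative):

from collections import defaultdict

M = [[1, 2],
     [3, 4]]        # m_ij
v = [10, 20]        # v_j

# Map: emit (i, m_ij * v_j) for every matrix element
mapped = [(i, M[i][j] * v[j])
          for i in range(len(M)) for j in range(len(v))]

# Reduce: sum the partial products for each row index i
x = defaultdict(int)
for i, term in mapped:
    x[i] += term

print([x[i] for i in sorted(x)])   # [50, 110]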

Fortunately, several problems in Machine Learning, Computer Vision, Regression and Analytics require large matrix operations, and Map-Reduce can be used very effectively in matrix manipulation. The computation of PageRank itself involves such matrix operations, which was one of the triggers for the Map-Reduce paradigm.

Handling failures: As mentioned earlier, the Map-Reduce implementation must be resilient to failures, where failures are the norm and not the exception. To handle this, the ‘master’ node periodically checks the health of the ‘worker’ nodes by pinging them. If the ping response does not arrive, the master marks the worker as ‘failed’ and restarts its task, generating the output on a server that is accessible.

Stragglers: Executing a job in parallel brings forth the famous saying ‘A chain is only as strong as its weakest link’. If one node is a straggler, delayed in its computation due to, say, disk errors, the Master node starts a backup worker and monitors its progress. When either the straggler or the backup completes, the master kills the other process.

Mining social networks and sentiment analysis of the Twitterverse also utilize Map-Reduce.

However, Map-Reduce is not a panacea for all of the industry’s computing problems (see To Hadoop, or not to Hadoop)

But Map-Reduce is a very critical paradigm in the distributed computing domain, as it is able to handle mountains of data, can handle multiple simultaneous failures, and is blazingly fast.

Also see
1. A crime map of India in R: Crimes against women
2.  What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
3.  Bend it like Bluemix, MongoDB with autoscaling – Part 2
4. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid

To see all posts click ‘Index of Posts’

From developerWorks – What’s up, Watson? Using Watson QAAPI with Bluemix and NodeExpress



My post in IBM developer Works – What’s up, Watson? Using Watson QAAPI with Bluemix and NodeExpress

Create a Bluemix™ application that uses Watson’s Question and Answer API (QAAPI). IBM’s Watson is capable of understanding the nuances of the English language. Bluemix now includes eight services from Watson, including Concept Expansion, Language Identification, Machine Translation, and Question and Answer. For more information on Watson’s QAAPI and the many services that have been included in Bluemix, see Watson Services.

The current release of Bluemix Watson is a corpus of medical facts. Watson has been made to ingest medical documents in multiple formats (doc, pdf, html, text, and so on), and the user can pose medical questions to the Watson QAAPI.

This tutorial shows how to use the Watson Question and Answer API to make queries and get results of various types

In the application described in this tutorial, NodeExpress is used to create a web server and to post questions to Watson using REST APIs. Jade is used to format the results of Watson’s response.

For more details and the latest code please see my full article in IBM developerWorks What’s up, Watson? Using Watson QAAPI with Bluemix and NodeExpress

Bend it like Bluemix, MongoDB with auto-scaling – Part 3


In this last post of the series, I test the performance of the Bluemix & MongoDB cloud app against concurrent queries and deletes, with auto-scaling on. Before starting this series of tests I moved the Overload policy a couple of notches higher, making it scale out if memory utilization > 75% for 120 secs and scale in if memory utilization < 30% for 120 secs (up from the earlier 55% memory utilization threshold), as shown below.

[Screenshot]

The code for the bluemixMongo app can be forked from Devops at bluemixMongo or can be cloned from GitHub at bluemix-mongo-autoscale. The multi-mechanize scripts can be downloaded from GitHub at multi-mechanize. Before starting the testing I checked the current number of documents inserted by the concurrent inserts (see Bend it like Bluemix, MongoDB using Auto-scaling – Part 2). The total number, as determined by checking the logs, was 1380.

[Screenshot]

Sure enough, with the scaling policy change, after 2 minutes the number of instances dropped from 3 to 2.

[Screenshot]

1. Querying the bluemixMongo app with Multi-mechanize

The Multi-mechanize Python script used for querying the bluemixMongo app simply invokes the app’s userlist URL (resp = br.open("http://bluemixmongo.mybluemix.net/userlist/")).

v_user.py

import random
import time

import mechanize

class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        # create a Browser instance
        br = mechanize.Browser()
        # don't bother with robots.txt
        br.set_handle_robots(False)
        # start the timer
        start_timer = time.time()
        # Display 5 random documents
        resp = br.open("http://bluemixmongo.mybluemix.net/userlist/")
        assert("Example Mongo Page" in resp.get_data())
        # stop the timer
        latency = time.time() - start_timer
        self.custom_timers["Userlist"] = latency
        # think-time
        r = random.uniform(1, 2)
        time.sleep(r)
        self.custom_timers['Example_Timer'] = r

The configuration setup for this script creates 2 sets of 10 concurrent threads

config.cfg
[global]
run_time = 300
rampup = 0
results_ts_interval = 10
progress_bar = on
console_logging = off
xml_report = off
[user_group-1]
threads = 10
script = v_user.py
[user_group-2]
threads = 10
script = v_user.py

The corresponding userlist.js for querying the app is shown below. Here the query is constructed by creating a ‘RegExp’ with a random Firstname, consisting of a random letter and a random number, and the query is limited to 5 documents.

function(callback)
{
  // Display a random set of 5 records based on a regular expression
  // made with a random letter and a random number
  var randnum = Math.floor((Math.random() * 10) + 1);
  var alpha = ['A','B','C','D','E','F','G','H','I','J','K','L','M',
               'N','O','P','Q','R','S','T','U','V','W','X','Y','Z'];
  var randletter = alpha[Math.floor(Math.random() * alpha.length)];
  var val = randletter + ".*" + randnum + ".*";
  // Limit the display to 5 documents
  collection.find({"FirstName": new RegExp(val)}).limit(5).toArray(function(err, items) {
    if (err) {
      console.log(err + " Error getting items for display");
    }
    else {
      res.render('userlist',
        { "userlist" : items
        }); // end res.render
    } // end else
    db.close(); // Ensure that the open connection is closed
  }); // end toArray function
  callback(null, 'two');
}

2. Running the userlist query

The following screenshot shows the userlist query being executed concurrently with Multi-mechanize. Note that the number of  instances also drops down to 1

[Screenshot]

3. Deleting documents with Multi-mechanize

The multi-mechanize script for deleting a document is shown below. This script calls the URL with resp = br.open("http://bluemixmongo.mybluemix.net/remuser"). No values are required to be entered into the form and the ‘submit’ is simulated.

v_user.py
import time

import mechanize

class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        # create a Browser instance
        br = mechanize.Browser()
        # don't bother with robots.txt
        br.set_handle_robots(False)
        br.addheaders = [("User-agent", "Mozilla/5.0Compatible")]
        # start the timer
        start_timer = time.time()
        # submit the request
        resp = br.open("http://bluemixmongo.mybluemix.net/remuser")
        resp.read()
        # stop the timer
        latency = time.time() - start_timer
        # store the custom timer
        self.custom_timers["Load_Front_Page"] = latency
        # think-time
        time.sleep(2)
        # select first (zero-based) form on page
        br.select_form(nr=0)
        # set form fields (no values are needed)
        br.form["firstname"] = ""
        br.form["lastname"] = ""
        br.form["mobile"] = ""
        # start the timer
        start_timer = time.time()
        # submit the form
        resp = br.submit()
        resp.read()
        print("Removed")
        # stop the timer
        latency = time.time() - start_timer
        # store the custom timer
        self.custom_timers["Delete"] = latency
        # think-time
        time.sleep(2)

config.cfg

The config file is set to start 2 sets of 10 concurrent threads and execute for 300 secs

[global]
run_time = 300
rampup = 0
results_ts_interval = 10
progress_bar = on
console_logging = off
xml_report = off
[user_group-1]
threads = 10
script = v_user.py
[user_group-2]
threads = 10
script = v_user.py

deleteuser.js

This Node.js script does a findOne() to fetch a document and removes it with ‘justOne’ set to true

collection.findOne(function(err, item) {
  // Delete just a single record
  collection.remove(item, {justOne: true}, (function(err, doc) {
    if (err) {
      // If it failed, return error
      res.send("There was a problem removing the information from the database.");
    }
    else {
      // If it worked redirect to userlist
      res.location("userlist");
      // And forward to success page
      res.redirect("userlist");
    }
  }));
});
collection.find().toArray(function(err, items) {
  console.log("Length =----------------" + items.length);
  db.close();
});
callback(null, 'two');

4. Running the deleteuser multimechanize script

The output of the script executing, and the reduction in the number of instances caused by the change in the memory utilization policy, is shown below

[Screenshot]

5. Multi-mechanize

As mentioned in the previous posts, the multi-mechanize commands are executed as follows.

To create a new project:
multimech-newproject.exe userlist
This will create 2 folders, a) results and b) test_scripts, and the file c) config.cfg. The v_user.py script needs to be updated as required

To run the script
multimech-run.exe userlist

The details of the response times for the query are shown below.

[Graph: All_Transactions_response_times_intervals]

 

More details on latency and throughput for the queries and the deletes are included in the results folder of multi-mechanize.

6. Autoscaling

The details of the auto-scaling service are shown below.

a. Scaling Metric Statistics

[Screenshot]

b. Scaling history

[Screenshot]

7. Monitoring and Analytics (M & A)

The output from M & A is shown below.

 

a. Performance Monitoring

[Screenshot]

b. Log Analysis

The log analysis gives a detailed report on the calls made to the app, the console log output and other useful information.

[Screenshot]

The series of 3 posts, Bend it like Bluemix, MongoDB with auto-scaling, demonstrated the ability of the cloud to expand and shrink based on load. An important requirement for Cloud Architects is to design applications that can scale horizontally without impacting performance, while keeping costs optimal. The real challenge of auto-scaling is the need to make the application truly distributed, as opposed to the monolithic architectures we are generally used to. I hope to write another post on creating a simple distributed application later.

Hasta la Vista!

Also see
1.  Bend it like Bluemix, MongoDB with autoscaling – Part 1
2. Bend it like Bluemix, MongoDB with autoscaling – Part 2

You may like :
a) Latency, throughput implications for the cloud
b) The many faces of latency
c) Brewing a potion with Bluemix, PostgreSQL & Node.js in the cloud
d)  A Bluemix recipe with MongoDB and Node.js
e)Spicing up IBM Bluemix with MongoDB and NodeExpress
f) Rock N’ Roll with Bluemix, Cloudant & NodeExpress

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Bend it like Bluemix, MongoDB using Auto-scaling – Part 2!


This post takes off from my previous post Bend it like Bluemix, MongoDB using Auto-scale – Part 1! In this post I generate traffic using Multi-Mechanize, a performance test framework, and check out auto-scaling on Bluemix, besides doing some rudimentary checks on the latency and throughput of this test application. In this particular post I generate concurrent threads which insert documents into MongoDB.

Note: As mentioned in my earlier post, this is more of a prototype, typical of the early stages of architecting cloud applications. Clearly I have not optimized my cloud app (bluemixMongo) for maximum efficiency. Also, this is a simple 2-tier application with a rudimentary Web interface and a NoSQL DB at the back end. It is more of a Proof of Concept (PoC) for the auto-scaling service on Bluemix.

As mentioned earlier, the bluemixMongo app is a modification of my earlier post Spicing up a IBM Bluemix cloud app with MongoDB and NodeExpress. The bluemixMongo cloud app that was used for this auto-scaling test can be forked from Devops at bluemixMongo or from GitHub at bluemix-mongo-autoscale. The Multi-mechanize config file, scripts and results can be found at GitHub in multi-mechanize

The document to be inserted into MongoDB consists of 3 fields – Firstname, Lastname and Mobile. To simulate the insertion of records into MongoDB, I created a Multi-Mechanize script that generates random combinations of letters and numbers for the First and Last names, and a random 9-digit number for the mobile. The code for this script is shown below

1. The snippet below measures the latency for loading the ‘New User’ page

v_user.py
import random
import string
import time

import mechanize

class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        # create a Browser instance
        br = mechanize.Browser()
        # don't bother with robots.txt
        br.set_handle_robots(False)
        print("Rendering new user")
        br.addheaders = [("User-agent", "Mozilla/5.0Compatible")]
        # start the timer
        start_timer = time.time()
        # submit the request
        resp = br.open("http://bluemixmongo.mybluemix.net/newuser")
        resp.read()
        # stop the timer
        latency = time.time() - start_timer
        # store the custom timer
        self.custom_timers["Load Add User Page"] = latency
        # think-time
        time.sleep(2)

The script also measures the time taken to submit the form containing the Firstname, Lastname and Mobile

        # select first (zero-based) form on page
        br.select_form(nr=0)
        # Create random Firstname
        a = (''.join(random.choice(string.ascii_uppercase) for i in range(5)))
        b = (''.join(random.choice(string.digits) for i in range(5)))
        firstname = a + b
        # Create random Lastname
        a = (''.join(random.choice(string.ascii_uppercase) for i in range(5)))
        b = (''.join(random.choice(string.digits) for i in range(5)))
        lastname = a + b
        # Create a random mobile number
        mobile = (''.join(random.choice(string.digits) for i in range(9)))
        # set form fields
        br.form["firstname"] = firstname
        br.form["lastname"] = lastname
        br.form["mobile"] = mobile
        # start the timer
        start_timer = time.time()
        # submit the form
        resp = br.submit()
        print("Submitted.")
        resp.read()
        # stop the timer
        latency = time.time() - start_timer
        # store the custom timer
        self.custom_timers["Add User"] = latency

2. The config.cfg file is set up to generate 2 asynchronous thread pools of 10 threads each for about 400 seconds

config.cfg
[global]
run_time = 400
rampup = 0
results_ts_interval = 10
progress_bar = on
console_logging = off
xml_report = off
[user_group-1]
threads = 10
script = v_user.py
[user_group-2]
threads = 10
script = v_user.py

3. The code to add a new user in the app (adduser.js) uses the ‘async’ Node module to enforce sequential processing.

adduser.js
async.series([
  function(callback)
  {
    collection = db.collection('phonebook', function(error, response) {
      if (error) {
        return; // Return immediately
      }
      else {
        console.log("Connected to phonebook");
      }
    });
    callback(null, 'one');
  },
  function(callback)
  {
    // Insert the record into the DB
    collection.insert({
      "FirstName" : FirstName,
      "LastName" : LastName,
      "Mobile" : Mobile
    }, function(err, doc) {
      if (err) {
        // If it failed, return error
        res.send("There was a problem adding the information to the database.");
      }
      else {
        // If it worked, redirect to userlist - Display users
        res.location("userlist");
        // And forward to success page
        res.redirect("userlist");
      }
    });
    collection.find().toArray(function(err, items) {
      console.log("**************************>>>>>>>Length =" + items.length);
      db.close(); // Make sure that the open DB connection is closed
    });
    callback(null, 'two');
  }
]);

4. To check out auto-scaling, the instance memory was kept at 128 MB. The scale-up policy was memory based: scale up when the memory of the instance exceeds 55% of 128 MB for 120 secs. The scale-up based on CPU utilization was to happen when utilization exceeded 80% for 300 secs.

[Screenshot]

5. Check the auto-scaling policy

[Screenshot]

6. Initially as seen there is just a single instance

[Screenshot]

7. At around 48% of the script, with around 623 transactions, one more instance is added. Note that the available memory decreases to 640 MB – 128 MB = 512 MB.

[Screenshot]

8. At around 1324 transactions another instance is added

Note: Bear in mind

a) The memory threshold was artificially brought down to 55% of 128 MB.
b) The app itself is not optimized for maximum efficiency.

[Screenshot]

9. The Metric Statistics tab for the Autoscaling service shows this memory breach and the trigger for autoscaling

[Screenshot]

10. The Scaling history Tab for the Auto-scaling service displays the scale-up and scale-down and the policy rules based on which the scaling happened

[Screenshot]

11. The response times and throughput are captured in the results folder of the Multi-mechanize tool.

The multi-mechanize commands are executed as follows
To create a new project
multimech-newproject.exe adduser
This will create 2 folders a) results b) test_scripts and the file c) config.cfg. The v_user.py needs to be updated as required

To run the script
multimech-run.exe adduser

12. The results are shown below

a) Load Add User page (Latency)

[Graph: Load Add User Page_response_times_intervals]

b) Load Add User page (Throughput)

[Graph: Load Add User Page_throughput]

c) Add User (Latency)

[Graph: Add User_response_times_intervals]

d) Add User (Throughput)

[Graph: Add User_throughput]

The detailed results can be seen at GitHub at multi-mechanize

13. Check the Monitoring and Analytics Page

a) Availability

[Screenshot]

b) Performance monitoring

[Screenshot]

So once the auto-scaling happens, the application can be fine-tuned for performance. Obviously one could do it the other way around too.

As can be seen, adding NoSQL databases like MongoDB, Redis, Cloudant DB etc. to a Bluemix app is simple, and setting up the auto-scaling policy is also painless, as seen above.

Of course the real challenge in cloud applications is to make them distributed and scalable while keeping the applications themselves lean and mean!

Also see
1. Bend it like Bluemix, MongoDB with autoscaling – Part 1
2. Bend it like Bluemix, MongoDB with autoscaling – Part 3

You may like :
a) Latency, throughput implications for the cloud
b) The many faces of latency
c) Brewing a potion with Bluemix, PostgreSQL & Node.js in the cloud
d)  A Bluemix recipe with MongoDB and Node.js
e)Spicing up IBM Bluemix with MongoDB and NodeExpress
f) Rock N’ Roll with Bluemix, Cloudant & NodeExpress

g) Design principles of scalable, distributed systems

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Bend it like Bluemix, MongoDB using Auto-scale – Part 1!


In the next series of posts I turn on the heat on my cloud deployment in IBM Bluemix and check out the elastic nature of this PaaS offering. Handling traffic load by elastically expanding and contracting is what the cloud does best. This is where the ‘rubber really meets the road’. In this series of posts I generate the traffic load using Multi-Mechanize, a performance test framework created by Corey Goldberg.

This post is based on an earlier cloud app that I created on Bluemix, namely Spicing up a IBM Bluemix Cloud app with MongoDB and NodeExpress. I had to make changes to that code to iron out issues in handling concurrent inserts, displays and deletes issued from the multi-mechanize tool, and also to manage the asynchronous nightmare of Node.js.

The code for this Bluemix, MongoDB with Auto-scaling can be forked  from Devops at bluemixMongo. The code can also be cloned from GitHub at bluemix-mongo-autoscale

1.  To get started, fork the code from Devops at bluemixMongo. Then change the host name in manifest.yml to something unique and click the Build and Deploy button on the top right in the page.

[Screenshot]

1a.  Alternatively the code can be cloned from GitHub at bluemix-mongo-autoscale. From the directory where the code is cloned push the code using Cloud Foundry’s cf command as follows

cf login -a https://api.ng.bluemix.net

cf push bluemixMongo -p . -m 128M

2. Now add the MongoDB service and click ‘OK’ to restage the server.

[Screenshot]

3. Add the Monitoring and Analytics (M & A) service and also the Auto-scaling service. M & A gives a good report on Availability and Performance, and also provides Log Analysis. The Auto-scaling service is the service that allows the app to expand elastically to changing traffic loads.

[Screenshot]

4. You should see the bluemixMongo app running with 3 services MongoDB, Autoscaling and M&A

[Screenshot]

5. You should now be able to click bluemixMongo.mybluemix.net and check the application out.

6. Now configure the Overload (auto-scaling) policy. This is a slightly contrived example, and the scaling policy is set to scale up if memory usage exceeds 55%. (Typically the scale-up would be configured for > 80% memory usage.)

[Screenshot]

7. Now check the configured Auto-scaling policy

[Screenshot]

8. Change the Memory Quota as appropriate. In my case I have kept the memory quota at 128 MB. Note that the available memory is 640 MB, which hence allows up to 5 instances. (By the way, it is also possible to set any other value, like 100 MB.)

[Screenshot]

9. Click the Monitoring and Analytics service and take a look at the output in the different tabs

[Screenshot]

10. Next you need to set up the performance test tool, Multi-mechanize. Multi-mechanize creates concurrent threads to generate load on a web site or service. It is based on Python, which makes it easy to modify the scripts for hitting a website, making a REST call or submitting a form.

To set up Multi-mechanize you also need additional packages like numpy and matplotlib, as the tool generates traffic based on a user-provided script, measures latency and throughput, and also generates graphs for these.

For detailed setup steps please follow Trying out multi-mechanize web performance and load testing framework. Note: I would suggest that you install Python 2.7.2 and not the later 3.x version, as some of the packages are based on the 2.7 version, and Python 3 has a slightly different syntax for certain statements.

In the next post I will run a traffic test on the bluemixMongo application using Multi-mechanize and observe how the cloud app responds to the load.

Watch this space!
Also see
Bend it like Bluemix, MongoDB with autoscaling – Part 2!
Bend it like Bluemix, MongoDB with autoscaling – Part 3

You may like :
a) Latency, throughput implications for the cloud
b) The many faces of latency
c) Brewing a potion with Bluemix, PostgreSQL & Node.js in the cloud
d)  A Bluemix recipe with MongoDB and Node.js
e)Spicing up IBM Bluemix with MongoDB and NodeExpress
f) Rock N’ Roll with Bluemix, Cloudant & NodeExpress

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Where is the Cloud Computing bus going?


Technological innovation patterns have often repeated themselves in history. So it is with Cloud Computing. Familiar patterns of change seem to be emerging today.

Here are some of main trends that I see in Cloud Computing

Advent of containers: Containers are the new hot topic in cloud computing. In hypervisor-based virtualization, guest OSes run separately, and running a separate heavyweight guest OS over the hypervisor carries a lot of overhead. Containers use OS-level virtualization to run multiple isolated systems on a single host. Containers within a single operating system are much more efficient, being lightweight while still providing the same level of isolation. Containers run the same kernel as the host. Here is an interesting article on containers: Containers, not virtual machines are the future of the cloud.

In many ways this containers-over-VMs innovation pattern is reminiscent of the advantages of lightweight ‘threads’ over the heavy and slow ‘process’ approach in the OS world. It seems inevitable that containers will eventually score over VMs.

Open ‘something’ over proprietary-ness: Technology over the decades has always moved towards ‘open’ approaches over proprietary solutions. Hence, for example, we have OpenStack for creating instances and provisioning storage and network, doing many things that are done separately by VMware, Citrix and Hyper-V. The intent is to have a common approach over several disparate ones. In the networking world there is OpenFlow, which tries to provide a uniform interface to the many different standards maintained by the Ciscos, Junipers and Brocades of the world. There are also other technologies like OpenCV (computer vision processing) and OpenVPN (VPN protocol). In all these approaches the move is either to unify the disparate approaches or to provide a layer over and above them. I am not sure whether OpenStack will prevail; only time will tell. I personally think we will move to a level of abstraction that will be even above that of OpenStack.

Software Defined Everything: Cloud Computing started with the need to be able to provision computing resources through a user interface or Web portal. This was made possible thanks to virtualization: users could now define and request computing resources. Soon this led to the need to programmatically request storage. The trick in storage is ‘thin provisioning’, or provisioning resources that barely satisfy the needs of the application, with the application able to request more storage programmatically. Not to be outdone, networking followed suit when Software Defined Networking became a reality after Stanford and the University of California came up with the OpenFlow protocol. We have now entered the era of the Software Defined Datacenter. This is a dominant theme in Cloud Computing.

These are some of the predominant trends that are emerging in the Cloud Computing arena.

I have spent more than 2 decades of my career in telecom, implementing telecom protocols, starting in the mid-1980s. The mid-1980s was the time when digital switches started to emerge. This was followed by a spate of protocols and dizzying innovations like mobile telephony, ISDN, Intelligent Networks, Softswitch, UMTS, 3G, HSDPA, LTE etc.

I personally think that Cloud computing, to use a very frayed and hackneyed term, is at a similar ‘inflexion point’. Trends are emerging and we will soon be caught in the maelstrom of rapid change and innovation.

In this post I am going to do a Marty McFly from the ‘Back to the Future’ trilogy. I am going to set the clock of the DeLorean DMC-12 to 2020 and ‘Whoosh…..’

21 Apr 2020:

It is 21 Apr 2020 and a sunny day.  Here is a look at the Cloud Computing landscape

  • The Organization of Cloud Computing Standards (OCCS) now sets and governs the standards for all Cloud Providers of the world
  • Common APIs govern provisioning of instances on the cloud regardless of the Cloud Provider. Instances are defined by RPE values, RAM and IOPS, LB, DNS requirements
  • Networking bandwidth, security and storage are also standards based
  • Enterprises use a ‘diffuse deployment’ strategy where the organization’s workloads are deployed to multiple cloud providers.
  • Workloads are Cloud Provider agnostic.
  • Enterprise applications themselves may span multiple cloud providers for e.g. the e-commerce in Cloud Provider 1, Analytics on HPC instances on Cloud Provider 2 and secure applications on Private Cloud of Cloud Provider 3. Appropriate contracts are maintained between the Cloud Providers for charging for the usage.
  • Algorithms are used by enterprises to deploy workloads to cloud providers. The algorithms match the SLA and cost requirements of the application with those offered by the cloud provider to minimize the cost while meeting the SLA requirements of the applications.
  • Compute, storage and networking costs fluctuate and enterprises use algorithms to optimize the deployment of workloads. Workloads are migrated to take advantage of these price changes
  • Consolidation and acquisitions happen at an alarming pace. Cloud providers, storage, network and HPC providers also compete fiercely
  • Cloud providers are swallowed by others and some lose out. The battle scene is bloody

Time to get back to Delorean. This time the clock on Delorean is set to 2025

18 Sep 2025

Today it is 18 Sep 2025, and it is sunny again, coincidentally.

  • Cloud Computing is dead, mate. These days technology has moved to ‘Cloud Computing in a box’.
  • The technology of these times is ‘Haze works’, where the computation happens in the stratosphere, over the ether …

So much for looking into the future. It is now time to get back to the reality of VMs

Revisiting Whats up, Watson – Using Watson’s Question and Answer with Bluemix – Part 2


In this post I revisit the Bluemix app based on Watson’s Question and Answer service which I had posted in my earlier article “Whats up Watson? Using IBM Watson with Bluemix, NodeExpress – Part 1“. In this post I removed some redundant code and also added some additional checks to the Jade templates to handle responses to “focusless” questions, viz. “Am I…?” or “Is X contagious?”

You can run the app at Whatsup Watson?

The code can be forked and cloned from Devops at Whatsup

The code is also available at GitHub at Whatsup

The section below briefly describes the details of the implementation of the WhatsupWatson app

A) app.js

In the app.js module the VCAP environment is parsed to get the credentials to use the Watson Question and Answer service as shown below

if (process.env.VCAP_SERVICES) {
  var VCAP_SERVICES = JSON.parse(process.env.VCAP_SERVICES);
  // retrieve the credential information from VCAP_SERVICES for Watson QAAPI
  hostname   = VCAP_SERVICES["question_and_answer"][0].name;               
  passwd = VCAP_SERVICES["question_and_answer"][0].credentials.password; 
  username = VCAP_SERVICES["question_and_answer"][0].credentials.username; 
  watson_url = VCAP_SERVICES["question_and_answer"][0].credentials.url;
}

There are different ways of asking Watson questions, and Watson’s response will vary depending on the options and parameters used to POST the question. This app uses a route for each ‘question type’ and option. These are

a. Simple Synchronous Query: Post a simple synchronous query to Watson

This is the simplest query that we can pose to Watson. Here we need to include just the text of the question and also a Sync Timeout. The Sync Timeout denotes the time the client will wait for responses from the Watson service

// Ask Watson a simple synchronous query
app.get('/question',question.list);
app.post('/simplesync',simplesync.list);

b. Evidence based question: Ask Watson to respond to evidence given to it

Ask Watson for responses based on evidence given like medical conditions etc.

// Ask Watson for responses based on evidence provided
app.get('/evidence',evidence.list);
app.post('/evidencereq',evidencereq.list);

c. Request for a specified set of answers to a question: Ask Watson to give a specified number of responses to a question

// Ask Watson to provide specified number of responses to a query
app.get('/items',items.list);
app.post('/itemsreq',itemsreq.list);

d. Get a formatted response to a question: Ask Watson to format the response to the question

// Get a formatted response from Watson for a query
app.get('/format',format.list);
app.post('/formatreq',formatreq.list);

To get started with Watson we need to connect the Bluemix app to Watson’s QAAPI as a service by parsing the environment variable. This is shown below

B) simplesync.js


The code in simplesync.js, evidencereq.js, itemsreq.js and formatreq.js is similar: each module constructs the question in the format required. The details of the implementation of simplesync.js are included below.

a. Watson’s corpus will be set to ‘healthcare’

parts = url.parse(watson_url +'/v1/question/healthcare');

b. The POST headers are set

// Set the required headers for posting the REST query to Watson
headers = {'Content-Type'  :'application/json',
                  'X-synctimeout' : syncTimeout,
                  'Authorization' : "Basic " + new Buffer(username+":"+passwd).toString("base64")};

c. The POST request options are set

// Create the request options to POST our question to Watson
var options = {host: parts.hostname,
port: 443,
path: parts.pathname,
method: 'POST',
headers: headers,
rejectUnauthorized: false, // ignore certificates
requestCert: true,
agent: false};

The question that is to be asked of Watson needs to be formatted appropriately based on the input received in the corresponding form (e.g. simplesync.jade)

// Get the values from the form
var syncTimeout = req.body.timeout;
var query = req.body.query;
// create the Question text to ask Watson
var question = {question : {questionText :query }};
var evidence = {"evidenceRequest":{"items":1,"profile":"yes"}};
// Set the POST body and send to Watson
req.write(JSON.stringify(question));
req.write("\n\n");
req.end();

Now you POST the question to Watson and receive the response stream using Node.js’ .on(‘data’) and .on(‘end’) handlers, as shown below

var req = https.request(options, function(result) {
result.setEncoding('utf-8');
// Retrieve and return the result back to the client
result.on(“data”, function(chunk) {
output += chunk;
});

result.on('end', function(chunk) {		  
           var answers = JSON.parse(output);
			      results = answers[0];
			      res.render(
					 'answer', {
                      "results":results
                                        
			   });
			
});

The results are parsed and formatted displayed using Jade. For the Jade templates I have used a combination of Jade and in-line HTML tags.

Included below is the part of the jade template with in-line HTML tagging

c) answer.jade

Implementation details of the WhatsupWatson app

The section below briefly describes the details of the implementation of the WhatsupWatson app

A) app.js

In the app.js module the VCAP environment is parsed to get the credentials to use the Watson Question and Answer service as shown below

if (process.env.VCAP_SERVICES) {
  var VCAP_SERVICES = JSON.parse(process.env.VCAP_SERVICES);
  // retrieve the credential information from VCAP_SERVICES for Watson QAAPI
  var hostname   = VCAP_SERVICES["question_and_answer"][0].name;
  var passwd     = VCAP_SERVICES["question_and_answer"][0].credentials.password;
  var username   = VCAP_SERVICES["question_and_answer"][0].credentials.username;
  var watson_url = VCAP_SERVICES["question_and_answer"][0].credentials.url;
}

There are different ways of asking Watson questions. Watson’s response will vary depending on the options and parameters used to POST the question to Watson. This app uses a route for each ‘question type’ and option. These are listed below; a sketch of how the route modules are pulled into app.js follows the list.

a. Simple Synchronous Query: Post a simple synchronous query to Watson

This is the simplest query that we can pose to Watson. Here we just need to include the text of the question and a Sync Timeout. The Sync Timeout denotes the time the client will wait for responses from the Watson service

// Ask Watson a simple synchronous query
app.get('/question',question.list);
app.post('/simplesync',simplesync.list);

b. Evidence based question: Ask Watson to respond to evidence given to it

Ask Watson for responses based on evidence given like medical conditions etc.

// Ask Watson for responses based on evidence provided
app.get('/evidence',evidence.list);
app.post('/evidencereq',evidencereq.list);

c. Request for a specified set of answers to a question: Ask Watson to give a specified number of responses to a question

// Ask Watson to provide specified number of responses to a query
app.get('/items',items.list);
app.post('/itemsreq',itemsreq.list);

d. Get a formatted response to a question: Ask Watson to format the response to the question

// Get a formatted response from Watson for a query
app.get('/format',format.list);
app.post('/formatreq',formatreq.list);
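
Each of these routes maps to its own module. For completeness, here is a minimal sketch of how the route modules might be pulled into app.js; the ./routes layout is my assumption and is not shown in the post:

// Require the route modules (hypothetical ./routes layout)
var question    = require('./routes/question');
var simplesync  = require('./routes/simplesync');
var evidence    = require('./routes/evidence');
var evidencereq = require('./routes/evidencereq');
var items       = require('./routes/items');
var itemsreq    = require('./routes/itemsreq');
var format      = require('./routes/format');
var formatreq   = require('./routes/formatreq');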

As shown above, parsing the VCAP_SERVICES environment variable is what connects the Bluemix app to Watson’s QAAPI service. With that in place, we can look at the route handler modules.

B) simplesync.js


The code in simplesync.js, evidencereq.js, itemsreq.js and formatreq.js is similar. Each module constructs the question in the format required. The details of the implementation of simplesync.js are included below.

a. The Watson corpus is set to ‘healthcare’

parts = url.parse(watson_url +'/v1/question/healthcare');

b. The POST headers are set

// Set the required headers for posting the REST query to Watson
var headers = {'Content-Type'  : 'application/json',
               'X-synctimeout' : syncTimeout,
               'Authorization' : "Basic " + new Buffer(username + ":" + passwd).toString("base64")};

c. The POST request options are set

// Create the request options to POST our question to Watson
var options = {host: parts.hostname,
               port: 443,
               path: parts.pathname,
               method: 'POST',
               headers: headers,
               rejectUnauthorized: false, // ignore certificates
               requestCert: true,
               agent: false};

The question to be asked of Watson needs to be formatted based on the input received from the corresponding form (e.g. simplesync.jade)

// Get the values from the form (req here is the Express request object)
var syncTimeout = req.body.timeout;
var query = req.body.query;
// create the Question text to ask Watson
var question = {question : {questionText : query}};
// evidenceRequest is used by the evidence-based route; it is not sent in this simple query
var evidence = {"evidenceRequest" : {"items" : 1, "profile" : "yes"}};
// Set the POST body and send to Watson (watsonReq is the https request created below)
watsonReq.write(JSON.stringify(question));
watsonReq.write("\n\n");
watsonReq.end();

Now you POST the question to Watson and receive the streamed response using Node.js’ .on(‘data’) and .on(‘end’) handlers, as shown below

var output = '';
var watsonReq = https.request(options, function(result) {
  result.setEncoding('utf-8');
  // Retrieve and accumulate the chunks of Watson's response
  result.on('data', function(chunk) {
    output += chunk;
  });
  // When the response ends, parse it and render the answer back to the client
  result.on('end', function() {
    var answers = JSON.parse(output);
    var results = answers[0];
    res.render('answer', {"results" : results});
  });
});
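
One thing the excerpt above does not show is error handling. A minimal sketch, assuming we simply report the failure to the client (this is my addition, not part of the original app):

// Handle connection errors on the https request to Watson
watsonReq.on('error', function(e) {
  res.status(500).send('Error contacting the Watson service: ' + e.message);
});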

The results are parsed, formatted and displayed using Jade. For the Jade templates I have used a combination of Jade and in-line HTML tags.

Included below is part of the Jade template with in-line HTML tagging

c) answer.jade

if results.question.qclasslist
  for result in results.question.qclasslist
    p <font color="blueviolet">  Value  </font> = <font color="black"> #{result.value} </font>
if results.question.focuslist
  p <font color="blueviolet">  Focuslist  </font> = <font color="black"> #{results.question.focuslist[0].value} </font>
// The 'How' query's response does not include latlist, hence the conditional
if results.question.latlist
  p <font color="blueviolet">  Latlist  </font> = <font color="black"> #{results.question.latlist[0].value} </font>
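
For the res.render() calls above to work, the Express app must be configured to use Jade as its view engine. This is standard Express boilerplate and is assumed here, since app.js is only excerpted:

// In app.js: tell Express where the templates live and which engine to use
var path = require('path');
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');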

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Find me on Google+

What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1


Published in IBM developerWorks: ‘What’s up Watson? Using Watson QAAPI with Bluemix and NodeExpress’

In this post I take the famed IBM Watson through its paces (yes, that’s right: this post is about using the same IBM Watson which trounced two human Jeopardy champions in a classic duel in 2011). IBM’s Watson (see What is Watson?) is capable of understanding the nuances of the English language and heralds a new era in the domain of cognitive computing. IBM Bluemix now includes 8 services from Watson, ranging from Concept Expansion, Language Identification and Machine Translation to Question-Answer. For more information on Watson’s QAAPI and the many services that have been included in Bluemix please see Watson Services.

In this article I create an application on IBM Bluemix and use Watson’s QAAPI (Question-Answer API) as a service to the Bluemix application. For the application I have used NodeExpress to create a Webserver and post the REST queries to Watson. Jade is used to format the results of Watson’s response.

In the current release of Bluemix, Watson comes with a corpus of medical facts. In other words, Watson has been made to ingest medical documents in multiple formats (doc, pdf, html, text etc.) and the user can pose medical questions to Dr. Watson. In its current avatar, its medical diet consists of dishes from CDC Health Topics, the National Heart, Lung, and Blood Institute (NHLBI), the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), the National Institute of Neurological Disorders and Stroke (NINDS), Cancer.gov (physician data query) etc.

Try out my Watson app on Bluemix here – Whats up Watson?

To get down to Watson QAAPI business with the Bluemix app, you can fork the code from DevOps Services at whatsup. This can then be downloaded to your local machine. You can also clone the code from GitHub at whatsup

1. To get started, go to the directory where you have cloned the code for the Whatsup app

2. Login to Bluemix using Cloud Foundry’s ‘cf’ commands as shown below

cf login -a https://api.ng.bluemix.net

3. Next push the app to Bluemix

cf push whatsup -p . -m 512M
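
Assuming the push succeeds, you can also verify the app from the command line with the cf CLI of the period:

cf apps
cf logs whatsup --recent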

In the Bluemix dashboard you should see the ‘whatsup’ app running. Now click ‘Add Service’ and under Watson add ‘Question Answer’

(Screenshots: adding the Watson QAAPI ‘Question Answer’ service to the app)

You will be prompted with ‘Restage Application’. Click ‘Ok’. Once you have the app running you should be able to get started with Doc Watson.

The code for this Bluemix app with QAAPI as a Service is based on the following article: Examples using the Question and Answer API

Here’s a look at the code for the Bluemix & Watson app.

In this Bluemix app I show the different types of Questions we can ask Watson and the responses we get from it. The app has a route for each of the different types of questions and options

a. Simple Synchronous Query: Post a simple synchronous query to Watson
This is the simplest query that we can pose to Watson. Here we just need to include the text of the question and a Sync Timeout. The Sync Timeout denotes the time the client will wait for responses from the Watson service
// Ask Watson a simple synchronous query

app.get('/question',question.list);
app.post('/simplesync',simplesync.list);

b. Evidence based question: Ask Watson to respond to evidence given to it
Ask Watson for responses based on evidence given, like medical conditions etc. This could be used for diagnostic purposes, I would presume.
// Ask Watson for responses based on evidence provided
app.get('/evidence',evidence.list);
app.post('/evidencereq',evidencereq.list);

c. Request for a specified set of answers to a question: Ask Dr. Watson to give a specified number of responses to a question
// Ask Watson to provide specified number of responses to a query
app.get('/items',items.list);
app.post('/itemsreq',itemsreq.list);

d. Get a formatted response to a question: Ask Dr. Watson to format the response to the question
// Get a formatted response from Watson for a query
app.get('/format',format.list);
app.post('/formatreq',formatreq.list);

To get started with Watson we need to connect the Bluemix app to Watson’s QAAPI as a service by parsing the environment variable. This is shown below

//Get the VCAP environment variables to connect Watson service to the Bluemix application

question.js
o o o
if (process.env.VCAP_SERVICES) {
  var VCAP_SERVICES = JSON.parse(process.env.VCAP_SERVICES);
  // retrieve the credential information from VCAP_SERVICES for Watson QAAPI
  var hostname   = VCAP_SERVICES["Watson QAAPI-0.1"][0].name;
  var passwd     = VCAP_SERVICES["Watson QAAPI-0.1"][0].credentials.password;
  var userid     = VCAP_SERVICES["Watson QAAPI-0.1"][0].credentials.userid;
  var watson_url = VCAP_SERVICES["Watson QAAPI-0.1"][0].credentials.url;
}

Next we need to format the headers and create the options for the POST request
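
The header construction is not shown in this excerpt; it is presumably the same as in the simplesync.js module described earlier, adapted to the userid variable used here:

// Set the required headers for the REST query (sketch, based on the simplesync.js excerpt)
var headers = {'Content-Type'  : 'application/json',
               'X-synctimeout' : syncTimeout,
               'Authorization' : "Basic " + new Buffer(userid + ":" + passwd).toString("base64")};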

var parts = url.parse(watson_url);
// Create the request options to POST our question to Watson
var options = {host: parts.hostname,
               port: 443,
               path: parts.pathname,
               method: 'POST',
               headers: headers,
               rejectUnauthorized: false, // ignore certificates
               requestCert: true,
               agent: false};

The question to be asked of Watson needs to be formatted based on the input received from the corresponding form (e.g. simplesync.jade)

question.js
// Get the values from the form (req here is the Express request object)
var syncTimeout = req.body.timeout;
var query = req.body.query;
// create the Question text to ask Watson
var question = {question : {questionText : query}};
// evidenceRequest is used by the evidence-based route; it is not sent in this simple query
var evidence = {"evidenceRequest" : {"items" : 1, "profile" : "yes"}};
// Set the POST body and send to Watson (watsonReq is the https request created below)
watsonReq.write(JSON.stringify(question));
watsonReq.write("\n\n");
watsonReq.end();

Now you POST the question to Dr. Watson and receive the streamed response using Node.js’ .on(‘data’) and .on(‘end’) handlers, as shown below

question.js
…..
var output = '';
var watsonReq = https.request(options, function(result) {
  result.setEncoding('utf-8');
  // Retrieve and accumulate the chunks of Watson's response
  result.on('data', function(chunk) {
    output += chunk;
  });
  // Capture Watson's response in output. Parse Watson's answer for the fields
  result.on('end', function() {
    var results = JSON.parse(output);
    res.render('answer', {"results" : results});
  });
});

The results are parsed, formatted and displayed using Jade. For the Jade templates I have used a combination of Jade and inline HTML tags (Jade can occasionally be very stubborn and make you sweat quite a bit, so I took the easier route of inline HTML tagging. In a later post I will try out CSS stylesheets to format the response.)

Included below is part of the Jade template with inline HTML tagging

Answer.jade
o o o
<h2 style="color:blueviolet"> Question Details </h2>
for result in results.question.qclasslist
  p <font color="blueviolet">  Value  </font> = <font color="black"> #{result.value} </font>
p <font color="blueviolet">  Focuslist  </font> = <font color="black"> #{results.question.focuslist[0].value} </font>
// The 'How' query's response does not include latlist. Hence conditional added.
if results.question.latlist
  p <font color="blueviolet">  Latlist  </font> = <font color="black"> #{results.question.latlist[0].value} </font>

o o o

Now that the code is all set you can fire up Watson. To do this click on the route


Click the route whatsup.mybluemix.net and ‘Lo and behold’ you should see Watson ready and raring to go.


As the display shows, there are 4 different Question-Answer options for the Watson QAAPI

Simple Synchronous Question-Answer
This is the simplest option. Here we just need to include the text of the question and a Sync Timeout. The question can be any medical-related question, as Watson in its current Bluemix avatar has a medical corpus

For example:
1) What is carotid artery disease?
2) What is the difference between hepatitis A and hepatitis B? etc.

The Sync Timeout parameter specifies the number of seconds the QAAPI client will wait for the streaming response from Watson. An example question and Watson’s response are included below
(Screenshot: an example question posed to Watson)

When we click Submit Watson spews out the following response

(Screenshot: Watson’s response)

Evidence based response:

In this mode of operation, questions can be posed to Watson based on observed evidence. Watson will output all relevant information based on the evidence provided. As seen in the output, Watson provides a ‘confidence factor’ for each of its responses
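
The post does not excerpt evidencereq.js, but given the evidenceRequest object defined (and unused) in the question.js excerpt above, its request body presumably merges that object into the question, roughly like this (a sketch, not the post’s actual code):

// Sketch: evidence-based request body, assumed from the evidenceRequest object shown earlier
var question = {question : {questionText : query,
                            evidenceRequest : {items : 1, profile : "yes"}}};
watsonReq.write(JSON.stringify(question));
watsonReq.end();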


Watson gives responses with appropriate confidence values based on the given evidence


Question with specified number of responses
In this option we can ask Watson to provide at least ‘n’ items in its response. If it cannot provide as many items, it will give an error notification
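
The itemsreq.js module is likewise not excerpted in this post. Based on the QAAPI question schema, the request body presumably carries an ‘items’ field alongside the question text, roughly as below; treat this as a sketch rather than the post’s actual code:

// Sketch: ask Watson for at least 3 answers ('items' field assumed from the QAAPI schema)
var question = {question : {questionText : query, items : 3}};
watsonReq.write(JSON.stringify(question));
watsonReq.write("\n\n");
watsonReq.end();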


This will bring up the following screen, where the question asked is “What is the treatment for Down’s syndrome?” and Items is set to 3.
(Screenshot: the question form with Items set to 3)

Watson gives 3 items in the response as shown below
(Screenshot: Watson’s 3-item response)

Formatted Response: Here Watson gives a formatted response to the question asked. Since I had already formatted the response using Jade, it does not do extra formatting, as seen in the screenshot.
(Screenshot: the formatted response)

Updated synonym based response: In this option we can change the synonym list, based on which Watson will search its medical corpus and modify its response. The synonym list for the question “What is fever?” is shown below. We can turn off synonyms by setting them to ‘false’ and possibly add other synonyms for the search
(Screenshot: the synonym list for “What is fever?”)

This part of the code has not been included in this post and is left as an exercise to the reader 🙂

As mentioned before you can fork and clone the code from IBM devops at whatsup or clone from GitHub at whatsup

There are many sections to Watson’s answer which cannot all be included in this post, as the amount of information is large and really needs to be pared down to pick out the important details. I am including small sections from each part of Watson’s response below to the question “How is carotid artery disease treated?”

I will follow up this post with another post where I will take a closer look at Watson’s response, which has many parts to it, namely (a consolidated sketch of the response shape follows this list):

– Question Details

– Evidence list

– Synonym list

– Answers


– Error notifications
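
Putting these together, and going by the fields referenced in answer.jade above, the parsed response has roughly the following shape. The field names are indicative, inferred from the template and the QAAPI of the period, rather than authoritative:

// Indicative shape of Watson's parsed response (inferred, abridged)
// var answers = JSON.parse(output); var results = answers[0];
// results.question.qclasslist          -> question classes, e.g. [{value: 'FACTOID'}]
// results.question.focuslist           -> the focus of the question
// results.question.latlist             -> the lexical answer type (absent for 'How' queries)
// results.question.evidencelist        -> supporting evidence with confidence values
// results.question.synonymList         -> synonyms used to expand the query
// results.question.answers             -> ranked answers with confidence values
// results.question.errorNotifications  -> errors, e.g. fewer than 'n' items available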

There you have it.  Go ahead and get all your medical questions answered by Watson.

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Other posts on Bluemix

1. Brewing a potion with Bluemix, PostgreSQL & Node.js in the cloud
2. A Bluemix recipe with MongoDB and Node.js
3. A Cloud Medley with IBM’s Bluemix, Cloudant and Node.js
4.  Bend it like Bluemix, MongoDB with autoscaling – Part 1

You may also find the following interesting
1. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
2. Informed choices through Machine Learning-2: Pitting together Kumble, Kapil,
3. A crime map of India in R: Crimes against women
4. Analyzing cricket’s batting legends – Through the mirage with R


Find me on Google+