LLR (Log-Likelihood Ratio) used for recommendations

This blog post aims to give a brief explanation of LLR. For more details, please refer to the original paper at http://ucrel.lancs.ac.uk/papers/tedstats.pdf. Another great resource is http://tdunning.blogspot.in/2008/03/surprise-and-coincidence.html, from which much of the inspiration for this post is drawn.

The approach is centered on analyzing counts of co-occurring events. Concretely, let's consider two events, A and B. Let event A be a view of an iPhone and event B a view of an iPad. The problem we are trying to solve is this: to what extent does the occurrence of one event indicate the likelihood of the other? So, if a user has viewed an iPhone, how likely is he to view an iPad?

Mathematical Treatment

Mathematically, it can be described as follows. To compute the score, we define a few counts. Let k_11 be the number of times the two events occurred together, let k_12 and k_21 be the number of times each event occurred without the other, and let k_22 be the number of times something was observed that was neither of these events. We get the following table:

                   Event A                    Everything but A
Event B            A and B together (k_11)    B, but not A (k_12)
Everything but B   A without B (k_21)         Neither A nor B (k_22)

The LLR score is given as follows:

LLR = 2 sum(k) (H(k) - H(rowSums(k)) - H(colSums(k)))

where H is Shannon's entropy, computed as the sum of (k_ij / sum(k)) log (k_ij / sum(k)) over the cells of the table. In R, this function can be defined as

H = function(k) {N = sum(k); return(sum(k / N * log(k / N + (k == 0))))} (the k == 0 term guards against taking the log of zero for empty cells).

Credit for the illustration: http://tdunning.blogspot.in/2008/03/surprise-and-coincidence.html
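
To make the formula easy to play with, here is a minimal, self-contained R sketch of the computation above. The counts in k are made-up numbers used only to illustrate the shape of the input:

# Dunning's H over a table of counts (the k == 0 term guards against log(0))
H <- function(k) {
  N <- sum(k)
  sum(k / N * log(k / N + (k == 0)))
}

# LLR score for a 2x2 contingency table of co-occurrence counts
llr <- function(k) {
  2 * sum(k) * (H(k) - H(rowSums(k)) - H(colSums(k)))
}

# Example: k_11 = 100 co-occurrences, k_12 = 50, k_21 = 40, k_22 = 10000
k <- matrix(c(100, 50, 40, 10000), nrow = 2, byrow = TRUE)
llr(k)   # about 766 for these counts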

An E-Commerce Example

Now, let's take a concrete example. Say we are designing a recommendation system for an e-commerce website. We analyse the user click-stream data and compute LLR scores between pairs of items. Say we get the following results:

Item            View-View LLR Scores
iPhone          iPad_12.31, iPod Touch_10.11, MacbookPro_8.44, …
iPad            iPadPro_16.11, …
Galaxy Nexus    Samsung Note_10.12, iPad_6.11, …

So, what exactly does this table represent? Consider the first entry in the first row. It means that users who view an iPad are likely to view an iPhone as well, and the LLR score is the quantitative measure of that association; in this case the value is 12.31.

Okay, that's all well and good. But how exactly does this serve recommendations? Consider a user, say John. John has come to the website and has just clicked on the link for an iPad. The problem we are trying to solve is this: given that John is currently viewing an iPad, what items should we recommend to him? LLR provides a ready answer.

Concretely, in the example above, we would query the View-View LLR Scores column. The item iPad appears in two rows (rows 1 and 3): in row 1 with a similarity score of 12.31, and in row 3 with a score of 6.11. So we recommend to the user an iPhone and a Galaxy Nexus, in the order determined by their similarity scores.
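
To make the lookup concrete, here is a minimal R sketch. The table holds the made-up scores from the example above, and the function name recommend_for is purely illustrative (it is reused in the sketches further below):

# LLR table from the example, one named score vector per row item
llr_table <- list(
  "iPhone"       = c("iPad" = 12.31, "iPod Touch" = 10.11, "MacbookPro" = 8.44),
  "iPad"         = c("iPadPro" = 16.11),
  "Galaxy Nexus" = c("Samsung Note" = 10.12, "iPad" = 6.11)
)

# Scan every row for the viewed item and collect the row's item
# together with its LLR score, sorted best first
recommend_for <- function(viewed) {
  scores <- numeric(0)
  for (item in names(llr_table)) {
    s <- llr_table[[item]][viewed]
    if (!is.na(s)) scores[item] <- s
  }
  sort(scores, decreasing = TRUE)
}

recommend_for("iPad")   # iPhone = 12.31, Galaxy Nexus = 6.11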

Let's extend our example to make the value more apparent. Say John now goes and looks at a Samsung Note, so we know two items that John has viewed. The question we are trying to answer becomes: given that John has viewed an iPad and a Samsung Note, what items should we recommend to him? Again, LLR comes to the rescue.

In the simplest approach, we derive recommendations from both of these items and then merge them using their similarity scores and suitable weights. Concretely, we have the following:

recommendations because he viewed iPad: [ (iPhone, 12.31), (Galaxy Nexus, 6.11) ]

recommendations because he viewed Samsung Note: [ (Galaxy Nexus, 10.12) ]

Now, we simply combine the above results to return the Galaxy Nexus followed by the iPhone in the list of recommendations. This illustrates the beauty of the approach: a product such as the Galaxy Nexus is ranked higher in the recommendations because it is indicated by multiple items the user has interacted with.
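
As a minimal sketch of that combination step (reusing the hypothetical recommend_for function from the earlier sketch, with equal weights for all viewed items), we can simply sum the LLR scores each candidate receives across the user's history:

# Merge the per-item recommendations by summing each candidate's scores,
# skipping items the user has already viewed
recommend_for_history <- function(history) {
  combined <- numeric(0)
  for (viewed in history) {
    recs <- recommend_for(viewed)
    for (item in names(recs)) {
      if (item %in% history) next
      combined[item] <- sum(combined[item], recs[item], na.rm = TRUE)
    }
  }
  sort(combined, decreasing = TRUE)
}

recommend_for_history(c("iPad", "Samsung Note"))
# Galaxy Nexus = 16.23 (6.11 + 10.12), iPhone = 12.31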

This drives real-time personalization. As the user visits different items, we build up a user history consisting of the items he has viewed or purchased. With each click we gain more knowledge about the user's intent and can use that knowledge to serve recommendations. Another significant advantage of this approach is that the LLR model can be readily indexed in a search engine such as Solr or Elasticsearch, reducing recommendation serving to simple search queries.
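
To make the search-index idea concrete, here is a hedged sketch (the field name viewed_with is hypothetical, not part of the Solr or Elasticsearch API). Each item becomes one indexed document whose indicator field holds the IDs of the items significantly linked to it:

iPhone        ->  viewed_with: iPad, iPod Touch, MacbookPro
Galaxy Nexus  ->  viewed_with: Samsung Note, iPad

Serving John is then a single free-text search on the viewed_with field with his history ("iPad Samsung Note") as the query; the engine's relevance ranking surfaces the items whose indicators overlap his history the most.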

The view-view computation was just one example. In practice you could have multiple indicators, such as view-bought alongside view-view; in that case views are indicated by both views and purchases. We can then maintain a user history vector of both the items he has viewed and those he has purchased, and use it to query the LLR tables and serve recommendations. Similarly, user searches can be an indicator: you could have a view-search_term table, which might look as follows:

Item            View-Search_Term LLR Scores
iPhone          apple_10.11, …
iPad            tablet_16.11, …
Galaxy Nexus    google_10.12, …

This clearly shows that searching for the term 'apple' indicates a strong affinity for viewing an item such as an iPhone.
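
Under the same assumptions as the earlier sketches, serving from several indicator tables is just a weighted version of the merge above. The tables below reuse the made-up scores from this post, and the indicator weights are purely hypothetical values that would need tuning in a real system:

# Hypothetical per-indicator weights
indicator_weights <- c(view_view = 1.0, view_search = 0.5)

# One small LLR table per indicator, with the made-up scores from above
llr_tables <- list(
  view_view = list(
    "iPhone"       = c("iPad" = 12.31),
    "Galaxy Nexus" = c("Samsung Note" = 10.12, "iPad" = 6.11)
  ),
  view_search = list(
    "iPhone"       = c("apple" = 10.11),
    "Galaxy Nexus" = c("google" = 10.12)
  )
)

# Weighted sum of the LLR scores a candidate item receives from the
# events (item views, search terms, ...) in the user's history
score <- function(candidate, history) {
  total <- 0
  for (ind in names(history)) {
    row <- llr_tables[[ind]][[candidate]]
    for (event in history[[ind]]) {
      s <- row[event]
      if (length(s) && !is.na(s)) total <- total + unname(indicator_weights[ind] * s)
    }
  }
  total
}

score("iPhone", list(view_view = c("iPad"), view_search = c("apple")))
# 1.0 * 12.31 + 0.5 * 10.11 = 17.365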

An example of how this is done can be found here: http://www.slideshare.net/pferrel/unified-recommender-39986309.

Hmm, sounds a lot like a simple co-occurrence count

The above approach has significant advantages over a simple co-occurrence count. Chief among them is that popular items will not be recommended unless they are genuinely indicated by other items. Concretely, say 'beer' is an item that is purchased in almost every order. It would then have a high co-occurrence count with every other item, and a naive co-occurrence measure would recommend 'beer' for every item viewed. That may prove useful for certain use cases, but it fails to capture an essential aspect of the problem: 'beer' co-occurs with, say, 'diapers' not because 'diapers' indicates 'beer' but simply because 'beer' is so popular that it co-occurs with most items in the database. LLR solves this by capturing only statistically significant co-occurrences, eliminating such recommendations.
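
To see the difference numerically, we can plug made-up counts into the llr function sketched earlier. Suppose 'beer' appears in 9,000 out of 10,000 baskets and 'diapers' in 1,000, entirely independently of each other:

# 'beer' and 'diapers' co-occur 900 times, but that is exactly as often
# as chance predicts for two independent items
k_beer <- matrix(c(900, 100, 8100, 900), nrow = 2, byrow = TRUE)
llr(k_beer)   # ~0: a huge raw co-occurrence count, yet no significance

# the iPhone/iPad counts from the earlier sketch: far fewer co-occurrences,
# but far more of them than chance predicts
k_phone <- matrix(c(100, 50, 40, 10000), nrow = 2, byrow = TRUE)
llr(k_phone)  # about 766: a genuinely significant association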

Implementation

Mahout has some great libraries that offer LLR implemented on Spark out of the box. The page at https://mahout.apache.org/users/algorithms/intro-cooccurrence-spark.html describes the spark-itemsimilarity command line tool, which can be used for computing LLR.
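
As a hedged sketch only (the linked page is the authoritative reference for the full option list and input format, and the file names here are placeholders), a minimal invocation takes a file of user/item interaction tuples and writes, for each item, the list of similar items with their LLR strengths:

mahout spark-itemsimilarity \
  --input user-item-interactions.csv \
  --output llr-similarity-output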

Another fantastic resource is https://github.com/pferrel/template-scala-parallel-universal-recommendation, an end-to-end implementation of a recommendation system built on the concepts discussed above.

If you have any questions, please post a comment; it would be great to have a discussion on this.
