Lecture 03

With document classification (e.g., spam filtering) as a motivating example, we reviewed issues with kNN for high-dimensional classification problems—namely the curse of dimensionality—and explored linear methods as an alternative. We first reviewed Bayes’ rule for inverting conditional probabilities via a simple, but perhaps counterintuitive, medical diagnosis example and then adapted this to an (extremely naive) one-word spam classifier. We improved upon this by considering all words present in a document and arrived at naive Bayes—a simple linear method for classification in which we model each word occurrence independently and use Bayes’ rule to calculate the probability that the document belongs to each class given the words it contains. Chapter 6 of Segaran covers related material and additional references are provided below. Historical references include Horvitz et al., 1998 and Graham, 2002.
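
As a concrete statement of the rule (written here in our own notation rather than copied from the slides), the one-word classifier computes the probability that a message is spam given that it contains a particular word as

    \[
    P(\mathrm{spam} \mid \mathrm{word}) =
      \frac{P(\mathrm{word} \mid \mathrm{spam}) \, P(\mathrm{spam})}
           {P(\mathrm{word} \mid \mathrm{spam}) \, P(\mathrm{spam}) + P(\mathrm{word} \mid \mathrm{ham}) \, P(\mathrm{ham})}
    \]

where the denominator is just P(word) expanded by the law of total probability, and the class-conditional word probabilities and class priors are estimated from counts over the labeled training messages.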

Some notes on naive Bayes:

  • Naive Bayes is in some ways the extreme opposite of kNN—whereas kNN estimates a piecewise constant density from the training examples, naive Bayes provides a globally linear estimate where all weights are estimated independently. Fitting to this constrained parametric form allows us to learn efficiently in high-dimensional feature spaces (at the risk of assuming an overly simple model of the data).
  • For simplicity we looked at a Bernoulli model of word occurrence in documents. That is, we represented each document by a binary vector over all words and modeled the probability of each word occurring in a document as an independent coin flip. There are, of course, many other document representations one might consider, along with their corresponding likelihood models. Choosing Which Naive Bayes to use is largely an empirical matter.
  • Training a naive Bayes classifier is computationally cheap and scalable. Using a frequentist estimate for the Bernoulli model, this amounts to simply counting the number of documents of each class containing each word in our dictionary. It’s also easy to update the model upon receiving new training data by incrementing these counts.
  • Likewise, making predictions with a trained naive Bayes model is computationally cheap, although this perhaps isn’t immediately obvious. With a simple manipulation of the posterior class probabilities, we saw that the log-odds are linear and sparse in the document features; a short derivation is sketched after these notes. Details are in the slides as well as David Lewis’ review of Naive Bayes at Forty.
  • The name “naive Bayes” is a bit of a misnomer. While the independence assumption is certainly untrue, in practice naive Bayes works reasonably well and turns out to be Not So Stupid After All. Likewise, there’s nothing necessarily Bayesian about simply using Bayes’ rule—that is, there’s no point at which one must take a subjective view of probabilities to employ naive Bayes.
  • This being said, simple frequentist estimates of model parameters can often result in overfitting. For example, in the Enron email data we looked at, there aren’t any spam messages containing the company’s name. Thus, a simple frequentist estimate might conclude that it’s impossible for the word “Enron” to occur in future spam, resulting in an easily gamed classifier. Smoothing estimates by adding pseudocounts to observed word occurrences (e.g., pretending each word occurred at least once in each document class) is one potential solution to this problem.
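
To make the counting, smoothing, and log-odds points above concrete, here is a brief sketch in our own notation (the symbols below are not taken from the slides): represent a document by a binary vector x whose j-th entry indicates whether word j occurs, abbreviate the classes as s (spam) and h (ham), and let theta_{c,j} denote the probability that word j occurs in a document of class c. Counting documents gives the smoothed estimate

    \[
    \hat{\theta}_{c,j} = \frac{n_{c,j} + 1}{n_c + 2}
    \]

where n_c is the number of training documents of class c and n_{c,j} is the number of those containing word j; adding 1 to the numerator and 2 to the denominator is one common choice of pseudocounts, and dropping them recovers the plain frequentist estimate. Under the independence assumption, the log-odds of spam are then

    \[
    \log \frac{P(\mathrm{spam} \mid x)}{P(\mathrm{ham} \mid x)}
      = \log \frac{P(\mathrm{spam})}{P(\mathrm{ham})}
      + \sum_j \left[ x_j \log \frac{\theta_{s,j}}{\theta_{h,j}}
                    + (1 - x_j) \log \frac{1 - \theta_{s,j}}{1 - \theta_{h,j}} \right]
      = w_0 + \sum_j w_j x_j
    \]

with each weight w_j = log(theta_{s,j} / theta_{h,j}) - log((1 - theta_{s,j}) / (1 - theta_{h,j})) and the constant w_0 collecting the prior odds and the (1 - theta) terms. The score is linear in the document features, and only the words actually present in a document (x_j = 1) contribute beyond the constant w_0, so scoring a message is sparse and cheap.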

Next week we’ll see that one way to justify this sort of smoothing is by considering maximum a posteriori inference for model parameters as an extension of the usual maximum likelihood estimates.

In the latter part of class we looked at a Bash shell script for classifying spam in the Enron email data set using a single word. We used grep (grep tutorial; regular expression cheatsheet) and wc to count the number of ham and spam messages in which a given word appears, and then implemented Bayes’ rule with bc to calculate the probability that a new message containing that word is spam. The full script is available on GitHub. We concluded with an introduction to awk for processing delimited text files. See some famous awk one-liners and their explanations for reference. And if you’re feeling adventurous, see this implementation of naive Bayes in awk.
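
For readers who want the flavor of that script without opening the repository, here is a minimal sketch along the same lines. It is not the course’s actual script; the directory layout (one message per file under enron/ham/ and enron/spam/) and the use of empirical class priors are assumptions made for illustration.

    #!/usr/bin/env bash
    # Minimal one-word spam score in the spirit of the script described above.
    # Assumes one message per file under enron/ham/ and enron/spam/ (layout is an assumption).
    word=${1:?usage: $0 word}

    # Count messages of each class, and how many of each contain the word (case-insensitive).
    n_ham=$(ls enron/ham | wc -l)
    n_spam=$(ls enron/spam | wc -l)
    ham_with_word=$(grep -il "$word" enron/ham/* | wc -l)
    spam_with_word=$(grep -il "$word" enron/spam/* | wc -l)

    # Estimate the class-conditional word probabilities and the class priors from counts.
    p_word_spam=$(echo "$spam_with_word / $n_spam" | bc -l)   # P(word | spam)
    p_word_ham=$(echo "$ham_with_word / $n_ham" | bc -l)      # P(word | ham)
    p_spam=$(echo "$n_spam / ($n_spam + $n_ham)" | bc -l)     # P(spam)
    p_ham=$(echo "$n_ham / ($n_spam + $n_ham)" | bc -l)       # P(ham)

    # Bayes' rule: P(spam | word) =
    #   P(word|spam) P(spam) / [ P(word|spam) P(spam) + P(word|ham) P(ham) ]
    echo "$p_word_spam * $p_spam / ($p_word_spam * $p_spam + $p_word_ham * $p_ham)" | bc -l

With these empirical priors the final expression actually collapses to spam_with_word / (spam_with_word + ham_with_word); the expanded form is kept only to mirror Bayes’ rule as written. Note also that the zero-count problem from the notes above shows up here unsmoothed: a word that never appears in spam drives the estimate straight to zero.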

