Text Classification with Naïve Bayes

One of the most successful applications of Naïve Bayes has been within the field of Natural Language Processing (NLP). NLP is a field closely related to machine learning, since many of its problems can be formulated as classification tasks. NLP problems usually come with large amounts of tagged data in the form of text documents, which can be used as training datasets for machine learning algorithms. In this section, we will use Naïve Bayes for text classification: we will have a set of text documents with their corresponding categories, and we will train a Naïve Bayes algorithm to learn to predict the categories of new unseen instances. This simple task has many practical applications; probably the best known and most widely used one is spam filtering. In this section, we will try to classify newsgroup messages using a dataset that can be retrieved from within scikit-learn. This dataset consists of around 19,000 newsgroup messages from 20 different topics, ranging from politics and religion to sports and science.

Start by importing numpy, scikit-learn, and pyplot, the Python libraries we will be using in this chapter, and show the versions we are using (in case you have problems running the notebooks).

In [1]:
%pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
Populating the interactive namespace from numpy and matplotlib
IPython version: 4.0.1
numpy version: 1.10.1
scikit-learn version: 0.17
matplotlib version: 1.5.0

Import the newsgroups dataset, and explore its structure and data (this could take some time, especially if sklearn has to download the 14 MB dataset from the Internet):

In [2]:
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')

Let's explore the dataset structure:

In [3]:
news.keys()
Out[3]:
['description', 'DESCR', 'filenames', 'target_names', 'data', 'target']

If we look at the properties of the dataset, we will find that we have the usual ones: DESCR, data, target, and target_names. The difference now is that data holds a list of text contents, instead of a numpy matrix:

In [4]:
print type(news.data), type(news.target), type(news.target_names)
print news.target_names
print len(news.data)
print len(news.target)
<type 'list'> <type 'numpy.ndarray'> <type 'list'>
['alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x', 'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball', 'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space', 'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast', 'talk.politics.misc', 'talk.religion.misc']
18846
18846

If you look at, say, the first instance, you will see the content of a newsgroup message, and you can get its corresponding category:

In [5]:
print news.data[0]
print news.target[0], news.target_names[news.target[0]]
From: Mamatha Devineni Ratnam <[email protected]>
Subject: Pens fans reactions
Organization: Post Office, Carnegie Mellon, Pittsburgh, PA
Lines: 12
NNTP-Posting-Host: po4.andrew.cmu.edu



I am sure some bashers of Pens fans are pretty confused about the lack
of any kind of posts about the recent Pens massacre of the Devils. Actually,
I am  bit puzzled too and a bit relieved. However, I am going to put an end
to non-PIttsburghers' relief with a bit of praise for the Pens. Man, they
are killing those Devils worse than I thought. Jagr just showed you why
he is much better than his regular season stats. He is also a lot
fo fun to watch in the playoffs. Bowman should let JAgr have a lot of
fun in the next couple of games since the Pens are going to beat the pulp out of Jersey anyway. I was very disappointed not to see the Islanders lose the final
regular season game.          PENS RULE!!!


10 rec.sport.hockey

Let's build the training and testing datasets:

In [6]:
SPLIT_PERC = 0.75
split_size = int(len(news.data)*SPLIT_PERC)
X_train = news.data[:split_size]
X_test = news.data[split_size:]
y_train = news.target[:split_size]
y_test = news.target[split_size:]

The following function will serve to perform and evaluate a k-fold cross-validation:

In [7]:
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem

def evaluate_cross_validation(clf, X, y, K):
    # create a k-fold cross-validation iterator with K folds
    cv = KFold(len(y), K, shuffle=True, random_state=0)
    # by default the score used is the one returned by the score method of the estimator (accuracy)
    scores = cross_val_score(clf, X, y, cv=cv)
    print scores
    print "Mean score: {0:.3f} (+/-{1:.3f})".format(
        np.mean(scores), sem(scores))

Our machine learning algorithms can work only on numeric data, so our next step will be to convert our text-based dataset into a numeric dataset. Currently we only have one feature: the text content of the message. We need some function that transforms a text into a meaningful set of numeric features. Intuitively, we could look at which words (or, more precisely, tokens, which also include numbers and punctuation marks) are used in each of the text categories, and try to characterize each category by the frequency distribution of those words. The sklearn.feature_extraction.text module has some useful utilities to build numeric feature vectors from text documents.

If you look inside the sklearn.feature_extraction.text module, you will find three different classes that can transform text into numeric features: CountVectorizer, HashingVectorizer, and TfidfVectorizer. The difference between them resides in the calculations they perform to obtain the numeric features. CountVectorizer basically creates a dictionary of words from the text corpus. Then, each instance is converted to a vector of numeric features where each element is the count of the number of times a particular word appears in the document. HashingVectorizer, instead of constructing and maintaining the dictionary in memory, implements a hashing function that maps tokens to feature indices, and then computes the counts as CountVectorizer does. TfidfVectorizer works like CountVectorizer, but with a more advanced calculation called Term Frequency Inverse Document Frequency (TF-IDF). This is a statistic for measuring the importance of a word in a document or corpus. Intuitively, it looks for words that are more frequent in the current document, compared with their frequency in the whole corpus of documents. You can see this as a way to normalize the results and avoid words that are too frequent, and thus not useful to characterize the instances.
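To get a concrete feel for the difference, here is a minimal sketch (not from the original notebook; the two-sentence toy corpus is made up) that vectorizes the same tiny corpus with CountVectorizer and TfidfVectorizer:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# a made-up two-document corpus, just to illustrate the output format
toy_corpus = ['the pens beat the devils', 'the devils lost again']

count_vect = CountVectorizer()
counts = count_vect.fit_transform(toy_corpus)
print count_vect.get_feature_names()
print counts.toarray()    # raw term counts per document

tfidf_vect = TfidfVectorizer()
tfidf = tfidf_vect.fit_transform(toy_corpus)
print tfidf.toarray()     # TF-IDF weights; terms present in every document get a lower idf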

We will create three Naïve Bayes classifiers, each composed of one of these feature vectorizers and the actual Bayes classifier, combined using the Pipeline class. We will use the MultinomialNB class from the sklearn.naive_bayes module.

In [8]:
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer

clf_1 = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', MultinomialNB()),
])
clf_2 = Pipeline([
    ('vect', HashingVectorizer(non_negative=True)),
    ('clf', MultinomialNB()),
])
clf_3 = Pipeline([
    ('vect', TfidfVectorizer()),
    ('clf', MultinomialNB()),
])
In [ ]:
clfs = [clf_1, clf_2, clf_3]
for clf in clfs:
    evaluate_cross_validation(clf, news.data, news.target, 5)

We will keep the TF-IDF vectorizer, but use a different regular expression to perform tokenization. The default regular expression, ur"\b\w\w+\b", considers alphanumeric characters and the underscore. Perhaps also considering the dash and the dot could improve the tokenization, allowing us to keep tokens such as Wi-Fi and site.com. The new regular expression could be: ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b". If you have queries about how to define regular expressions, please refer to the Python re module documentation. Let's try our new classifier:

In [10]:
clf_4 = Pipeline([
    ('vect', TfidfVectorizer(
                token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
    )),
    ('clf', MultinomialNB()),
])
In [11]:
evaluate_cross_validation(clf_4, news.data, news.target, 5)
[ 0.86100796  0.8718493   0.86203237  0.87291059  0.8588485 ]
Mean score: 0.865 (+/-0.003)
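To see what the new token_pattern actually changes, the following quick sketch (the sample string is made up) compares the vectorizer's default analyzer with the customized one:

# compare default tokenization with the customized token_pattern
sample = u"My Wi-Fi card died; check site.com for details"
default_analyzer = TfidfVectorizer().build_analyzer()
custom_analyzer = TfidfVectorizer(
    token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b").build_analyzer()
print default_analyzer(sample)   # 'wi', 'fi', 'site', and 'com' become separate tokens
print custom_analyzer(sample)    # 'wi-fi' and 'site.com' are kept as single tokens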

Another parameter that we can use is stop_words: this argument allows us to pass a list of words we do not want to take into account, such as words that are too frequent, or words that we do not a priori expect to provide information about the particular topic. Let's try to improve performance by filtering out the stop words:

In [12]:
def get_stop_words():
    result = set()
    for line in open('data/stopwords_en.txt', 'r').readlines():
        result.add(line.strip())
    return result
In [13]:
stop_words = get_stop_words()
print stop_words
set(['all', 'six', 'less', 'being', 'indeed', 'over', 'move', 'anyway', 'four', 'not', 'own', 'through', 'yourselves', 'fify', 'where', 'mill', 'only', 'find', 'before', 'one', 'whose', 'system', 'how', 'somewhere', 'with', 'thick', 'show', 'had', 'enough', 'should', 'to', 'must', 'whom', 'seeming', 'under', 'ours', 'has', 'might', 'thereafter', 'latterly', 'do', 'them', 'his', 'around', 'than', 'get', 'very', 'de', 'none', 'cannot', 'every', 'whether', 'they', 'front', 'during', 'thus', 'now', 'him', 'nor', 'name', 'several', 'hereafter', 'always', 'who', 'cry', 'whither', 'this', 'someone', 'either', 'each', 'become', 'thereupon', 'sometime', 'side', 'two', 'therein', 'twelve', 'because', 'often', 'ten', 'our', 'eg', 'some', 'back', 'up', 'go', 'namely', 'towards', 'are', 'further', 'beyond', 'ourselves', 'yet', 'out', 'even', 'will', 'what', 'still', 'for', 'bottom', 'mine', 'since', 'please', 'forty', 'per', 'its', 'everything', 'behind', 'un', 'above', 'between', 'it', 'neither', 'seemed', 'ever', 'across', 'she', 'somehow', 'be', 'we', 'full', 'never', 'sixty', 'however', 'here', 'otherwise', 'were', 'whereupon', 'nowhere', 'although', 'found', 'alone', 're', 'along', 'fifteen', 'by', 'both', 'about', 'last', 'would', 'anything', 'via', 'many', 'could', 'thence', 'put', 'against', 'keep', 'etc', 'amount', 'became', 'ltd', 'hence', 'onto', 'or', 'con', 'among', 'already', 'co', 'afterwards', 'formerly', 'within', 'seems', 'into', 'others', 'while', 'whatever', 'except', 'down', 'hers', 'everyone', 'done', 'least', 'another', 'whoever', 'moreover', 'couldnt', 'throughout', 'anyhow', 'yourself', 'three', 'from', 'her', 'few', 'together', 'top', 'there', 'due', 'been', 'next', 'anyone', 'eleven', 'much', 'call', 'therefore', 'interest', 'then', 'thru', 'themselves', 'hundred', 'was', 'sincere', 'empty', 'more', 'himself', 'elsewhere', 'mostly', 'on', 'fire', 'am', 'becoming', 'hereby', 'amongst', 'else', 'part', 'everywhere', 'too', 'herself', 'former', 'those', 'he', 'me', 'myself', 'made', 'twenty', 'these', 'bill', 'cant', 'us', 'until', 'besides', 'nevertheless', 'below', 'anywhere', 'nine', 'can', 'of', 'toward', 'my', 'something', 'and', 'whereafter', 'whenever', 'give', 'almost', 'wherever', 'is', 'describe', 'beforehand', 'herein', 'an', 'as', 'itself', 'at', 'have', 'in', 'seem', 'whence', 'ie', 'any', 'fill', 'again', 'hasnt', 'inc', 'thereby', 'thin', 'no', 'perhaps', 'latter', 'meanwhile', 'when', 'detail', 'same', 'wherein', 'beside', 'also', 'that', 'other', 'take', 'which', 'becomes', 'you', 'if', 'nobody', 'see', 'though', 'may', 'after', 'upon', 'most', 'hereupon', 'eight', 'but', 'serious', 'nothing', 'such', 'your', 'why', 'a', 'off', 'whereby', 'third', 'i', 'whole', 'noone', 'sometimes', 'well', 'amoungst', 'yours', 'their', 'rather', 'without', 'so', 'five', 'the', 'first', 'whereas', 'once'])
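As an aside, scikit-learn also ships with a built-in English stop word list; here is a minimal sketch of using it instead of our custom file-based list (the pipeline name clf_builtin_sw is just illustrative):

# alternative sketch: use scikit-learn's built-in English stop word list
# instead of the custom list loaded from data/stopwords_en.txt
clf_builtin_sw = Pipeline([
    ('vect', TfidfVectorizer(stop_words='english')),
    ('clf', MultinomialNB()),
])
# it can be evaluated exactly as before:
# evaluate_cross_validation(clf_builtin_sw, news.data, news.target, 5)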
In [14]:
clf_5 = Pipeline([
    ('vect', TfidfVectorizer(
                stop_words=stop_words,
                token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",    
    )),
    ('clf', MultinomialNB()),
])
In [15]:
evaluate_cross_validation(clf_5, news.data, news.target, 5)
[ 0.88116711  0.89519767  0.88325816  0.89227912  0.88113558]
Mean score: 0.887 (+/-0.003)

Let's try to improve the results further by adjusting the alpha parameter of the MultinomialNB classifier (alpha controls the additive smoothing applied to the term counts, and defaults to 1.0):

In [16]:
clf_7 = Pipeline([
    ('vect', TfidfVectorizer(
                stop_words=stop_words,
                token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",         
    )),
    ('clf', MultinomialNB(alpha=0.01)),
])
In [17]:
evaluate_cross_validation(clf_7, news.data, news.target, 5)
[ 0.9204244   0.91960732  0.91828071  0.92677103  0.91854603]
Mean score: 0.921 (+/-0.002)

The results got a notable boost, from 0.887 to 0.921; pretty good. At this point, we could continue running trials with different values of alpha, or with new modifications of the vectorizer. In Chapter 4, Advanced Features, we will show you practical utilities to try many different configurations and keep the best one.
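For instance, such trials could be run with a simple loop over a few smoothing values (a sketch; the alpha values below are arbitrary), reusing the evaluate_cross_validation function defined earlier:

# sketch: try a few (arbitrary) smoothing values; clf_7 above uses alpha=0.01
for alpha in [1.0, 0.1, 0.01, 0.001]:
    clf = Pipeline([
        ('vect', TfidfVectorizer(
                    stop_words=stop_words,
                    token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
        )),
        ('clf', MultinomialNB(alpha=alpha)),
    ])
    print 'alpha =', alpha
    evaluate_cross_validation(clf, news.data, news.target, 5)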

If we decide that we have made enough improvements in our model, we are ready to evaluate its performance on the testing set.

In [18]:
from sklearn import metrics

def train_and_evaluate(clf, X_train, X_test, y_train, y_test):
    
    clf.fit(X_train, y_train)
    
    print "Accuracy on training set:"
    print clf.score(X_train, y_train)
    print "Accuracy on testing set:"
    print clf.score(X_test, y_test)
    
    y_pred = clf.predict(X_test)
    
    print "Classification Report:"
    print metrics.classification_report(y_test, y_pred)
    print "Confusion Matrix:"
    print metrics.confusion_matrix(y_test, y_pred)
In [19]:
train_and_evaluate(clf_7, X_train, X_test, y_train, y_test)
Accuracy on training set:
0.996957690675
Accuracy on testing set:
0.917869269949
Classification Report:
             precision    recall  f1-score   support

          0       0.95      0.88      0.91       216
          1       0.85      0.85      0.85       246
          2       0.91      0.84      0.87       274
          3       0.81      0.86      0.83       235
          4       0.88      0.90      0.89       231
          5       0.89      0.91      0.90       225
          6       0.88      0.80      0.84       248
          7       0.92      0.93      0.93       275
          8       0.96      0.98      0.97       226
          9       0.97      0.94      0.96       250
         10       0.97      1.00      0.98       257
         11       0.97      0.97      0.97       261
         12       0.90      0.91      0.91       216
         13       0.94      0.95      0.95       257
         14       0.94      0.97      0.95       246
         15       0.90      0.96      0.93       234
         16       0.91      0.97      0.94       218
         17       0.97      0.99      0.98       236
         18       0.95      0.91      0.93       213
         19       0.86      0.78      0.82       148

avg / total       0.92      0.92      0.92      4712

Confusion Matrix:
[[190   0   0   0   1   0   0   0   0   1   0   0   0   1   0   9   2   0
    0  12]
 [  0 208   5   3   3  13   4   0   0   0   0   1   3   2   3   0   0   1
    0   0]
 [  0  11 230  22   1   5   1   0   1   0   0   0   0   0   1   0   1   0
    1   0]
 [  0   6   6 202   9   3   4   0   0   0   0   0   4   0   1   0   0   0
    0   0]
 [  0   2   3   4 208   1   5   0   0   0   2   0   5   0   1   0   0   0
    0   0]
 [  0   9   2   2   1 205   0   1   1   0   0   0   0   2   1   0   0   1
    0   0]
 [  0   2   3  10   6   0 199  14   1   2   0   1   5   2   2   0   0   1
    0   0]
 [  0   1   1   1   1   0   6 257   4   1   0   0   0   1   0   0   2   0
    0   0]
 [  0   0   0   0   0   1   1   2 221   0   0   0   0   1   0   0   0   0
    0   0]
 [  0   0   0   0   0   0   1   0   2 236   5   0   1   3   0   1   1   0
    0   0]
 [  0   0   0   1   0   0   0   0   0   0 256   0   0   0   0   0   0   0
    0   0]
 [  0   0   0   0   0   1   0   1   0   0   0 254   0   1   0   0   3   0
    1   0]
 [  0   1   0   1   5   1   3   1   0   2   1   1 197   1   2   0   0   0
    0   0]
 [  0   1   0   1   1   0   0   0   0   0   0   2   2 245   3   0   1   0
    0   1]
 [  0   2   0   0   1   0   0   1   0   0   0   0   0   1 238   0   1   0
    1   1]
 [  1   0   1   2   0   0   0   1   0   0   0   1   1   0   1 225   0   1
    0   0]
 [  0   0   1   0   0   0   1   0   1   0   0   1   0   0   0   0 212   0
    2   0]
 [  0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 234
    1   0]
 [  0   0   0   0   0   0   1   0   0   0   0   2   1   1   0   1   7   3
  193   4]
 [  9   0   0   0   0   1   0   0   0   1   0   0   0   0   0  13   4   1
    4 115]]

As we can see, we obtained very good results and, as we would expect, the accuracy on the training set is noticeably better than on the testing set. On new, unseen instances, we may expect an accuracy of around 0.91.
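Since we loaded matplotlib at the beginning of the chapter, we can also render the confusion matrix graphically; here is a minimal sketch (not part of the original notebook):

# optional sketch: visualize the confusion matrix of clf_7 on the testing set
cm = metrics.confusion_matrix(y_test, clf_7.predict(X_test))
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.xlabel('Predicted class')
plt.ylabel('True class')
plt.show()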

If we look inside the vectorizer, we can see which tokens have been used to create our dictionary:

In [20]:
clf_7.named_steps['vect'].get_feature_names()
Out[20]:
[u'0-.66d8wt',
 u'0-04g55',
 u'0-100mph',
 u'0-13-117441-x--or',
 u'0-3mb',
 u'0-40mb',
 u'0-40volts',
 u'0-5mb',
 u'0-60mph',
 u'0-8.3mb',
 u'0-a00138',
 u'0-byte',
 u'0-defects',
 u'0-e8',
 u'0-for-4',
 u'0-hc',
 u'0-ii',
 u'0-uw',
 u'0-uw0',
 u'0-uw2',
 u'0-uwa',
 u'0-uwt',
 u'0-uwt7',
 u'0-uww',
 u'0-uww7',
 u'0.-w0',
 u'0..x-1',
 u'0.00...nice',
 u'0.02cents',
 u'0.0cb',
 u'0.1-ports',
 u'0.15mb',
 u'0.2d-_',
 u'0.5db',
 u'0.6-micron',
 u'0.65mb',
 u'0.97pl4',
 u'0.b34s_',
 u'0.c0rgo5kj7pp0',
 u'0.c4',
 u'0.jy',
 u'0.s_',
 u'0.tprv6ekj7r',
 u'0.tt',
 u'0.txa_',
 u'0.txc',
 u'0.vpp',
 u'0.vpsll2',
 u'00-index.txt',
 u'000-foot',
 u'000-kg',
 u'000-man',
 u'000-maxwell',
 u'000-strong',
 u'000000.active.spx',
 u'000062david42',
 u'000100255pixel',
 u'0005111312na1em',
 u'0005111312na3em',
 u'000hz',
 u'000iu',
 u'000mg',
 u'000mi',
 u'000miles',
 u'000puq9',
 u'000rpm',
 u'000th',
 u'000ug',
 u'000usd',
 u'0010580b.0b6r49',
 u'0010580b.vma7o9',
 u'0010580b.vmcbrt',
 u'001200201pixel',
 u'002251w.5.734117130',
 u'007bww3',
 u'007gjf3',
 u'00969fba.e640ff10',
 u'0096b0f0.c5de05a0',
 u'0096b11b.08a283a0',
 u'0096b294.aad9c1e0',
 u'00acearl',
 u'00am',
 u'00bjgood',
 u'00cgbabbitt',
 u'00cjmelching',
 u'00cmmiller',
 u'00ecgillespi',
 u'00ecgillespie',
 u'00index',
 u'00lz8bct',
 u'00mbstultz',
 u'00pm',
 u'00pm-9',
 u'00pmlemen',
 u'0100lines',
 u'01050810.vkcsbl',
 u'01050810.vuumdq',
 u'0123456789abcdef',
 u'014t4',
 u'01_introduction.ma',
 u'01apr93.17160985.0059',
 u'01c8',
 u'01f6',
 u'01h0',
 u'01ll',
 u'01ne',
 u'01ob',
 u'01vl2',
 u'01ya',
 u'02-0zl',
 u'02-jul-92',
 u'02-q9ign',
 u'020qw',
 u'026bf',
 u'02_math_model.ma',
 u'02_math_models.ma',
 u'02at',
 u'02bp1m51',
 u'02bz',
 u'02e0',
 u'02f8',
 u'02ixl',
 u'02mm',
 u'02qvq',
 u'02r4e',
 u'02tl',
 u'02tm_',
 u'02tmn',
 u'02va7pu',
 u'02vx',
 u'02vy',
 u'02vyn',
 u'02vz089',
 u'03-sep-1967',
 u'030-based',
 u'0300ff',
 u'03_1_transient_response.ma',
 u'03_2_transient_response.ma',
 u'03_3_transient_response.ma',
 u'03aa',
 u'03ab',
 u'03e8',
 u'03f8',
 u'03ho.lk8',
 u'03hord',
 u'03hz',
 u'03hz.b',
 u'03hz.cj1',
 u'03hz.fg',
 u'03hz.h8o.ci',
 u'03hzri',
 u'03i3',
 u'03ii',
 u'03ii.chzd9',
 u'03imv',
 u'03is',
 u'03j1.lk',
 u'03j1d9',
 u'03k8.chzv',
 u'03k8rg',
 u'03m4u',
 u'03u0',
 u'03vo',
 u'04.cesyy',
 u'0430-1500ut',
 u'0433nl',
 u'044tcya',
 u'045q2',
 u'046p4',
 u'046q2b5u',
 u'046sau',
 u'046um',
 u'04_steady_state_response.ma',
 u'04ax',
 u'04he',
 u'04hj',
 u'04hlal',
 u'04hm34u',
 u'04jdj',
 u'04mk',
 u'04p2',
 u'04trtcp',
 u'04wsedwjy',
 u'04x1',
 u'04zb',
 u'055555556q-34u',
 u'055555556ql34u-34u--jjjjjjj',
 u'05_root_locus.ma',
 u'05apr93.02451203.0049',
 u'05apr93.02678944.0049',
 u'05apr93.13661642.0023',
 u'05bc5cvfq',
 u'05dzu',
 u'05fh',
 u'05gd87g',
 u'05ic',
 u'05jl1i',
 u'05ll',
 u'05lma',
 u'05lxm34',
 u'05lxn',
 u'05pm',
 u'05rov',
 u'05s.5',
 u'0600lines',
 u'06_freq_response.ma',
 u'06a7_e9',
 u'06dz.b',
 u'06eh.c4',
 u'06eh.ya',
 u'06eh.yk6ql2',
 u'06f1',
 u'06hwke',
 u'06ku',
 u'06kv',
 u'06mz',
 u'06n.edo6',
 u'06paul',
 u'06s4bnv',
 u'06tz',
 u'06tzv',
 u'06umv',
 u'06w8',
 u'06zkc4',
 u'07-may-93',
 u'07.sl',
 u'07.v0',
 u'07220yfz',
 u'0776ov_h',
 u'07_state_space.ma',
 u'07aq',
 u'07cgk',
 u'07iz',
 u'07l99',
 u'07lhs',
 u'07qnjbue',
 u'07sc',
 u'07tic',
 u'0824e2vyn',
 u'088z.lk',
 u'08m9.sl',
 u'08oz',
 u'08u12',
 u'08ws',
 u'09_d2p',
 u'09aa',
 u'09g9',
 u'09h3o',
 u'09k_',
 u'09m81h',
 u'09oxdk',
 u'09w0f',
 u'0_8ge',
 u'0_e8',
 u'0_h1',
 u'0_kp82_5',
 u'0_ww',
 u'0_zs',
 u'0a000',
 u'0a1',
 u'0a3',
 u'0a34',
 u'0a4pirs-f0um15',
 u'0a7h23iai7',
 u'0a99',
 u'0ab15j2qf3f',
 u'0adh',
 u'0ae6c',
 u'0ain',
 u'0am',
 u'0amo',
 u'0aujqb',
 u'0av',
 u'0aw',
 u'0ax',
 u'0ayf',
 u'0b-a',
 u'0b-x2',
 u'0b1fatransfer',
 u'0b2',
 u'0b4dam',
 u'0b6er',
 u'0b8',
 u'0b800',
 u'0b800h',
 u'0bh',
 u'0bj',
 u'0bla',
 u'0bm2',
 u'0bn',
 u'0bnw',
 u'0bus',
 u'0bv',
 u'0bvm005',
 u'0bz',
 u'0c000',
 u'0c4v',
 u'0c5r',
 u'0c800',
 u'0cdwkv_',
 u'0cg',
 u'0cgf',
 u'0ct1t',
 u'0cz',
 u'0d-jm',
 u'0d.8',
 u'0d.x',
 u'0d1',
 u'0d2',
 u'0d36b',
 u'0d4',
 u'0d6',
 u'0d7',
 u'0d84.sz',
 u'0db',
 u'0ded',
 u'0df',
 u'0dfsx',
 u'0dfvij',
 u'0dfyl',
 u'0dgq',
 u'0dgw83',
 u'0dh',
 u'0di',
 u'0dj',
 u'0dl',
 u'0dn1',
 u'0dnynno-7',
 u'0doh7',
 u'0du',
 u'0dum',
 u'0dvf2l',
 u'0dy.tm',
 u'0e000',
 u'0e1',
 u'0e3udg11',
 u'0e4',
 u'0e75',
 u'0e75x',
 u'0e9',
 u'0e97pm4',
 u'0e97pm8',
 u'0e97pms8',
 u'0echy',
 u'0ek',
 u'0ek-c8v',
 u'0ek-c8v-c8v-c8v-c9n',
 u'0ek-c9n',
 u'0ek-c9nv1',
 u'0ekr',
 u'0en36',
 u'0ep',
 u'0erdivbud',
 u'0ex',
 u'0ex6',
 u'0ez',
 u'0f.1p',
 u'0f000',
 u'0f0064',
 u'0f1',
 u'0f18fa5b225d03d3a401973b4318dd0e',
 u'0f1u',
 u'0f3',
 u'0f8',
 u'0ffnm',
 u'0fgj5',
 u'0fh',
 u'0fhmt',
 u'0fj',
 u'0fo0',
 u'0forqfa00iuzmatnmz',
 u'0fovj7i00wb4miumht',
 u'0fpzy',
 u'0fq',
 u'0frolv200awvi3iv4s',
 u'0fs',
 u'0fv8',
 u'0fw',
 u'0fz1mtpe',
 u'0g12o',
 u'0g19',
 u'0g4',
 u'0g8',
 u'0g_g',
 u'0gg',
 u'0ggu',
 u'0ggv',
 u'0gi',
 u'0gij',
 u'0giyx',
 u'0gj',
 u'0gl',
 u'0gyts',
 u'0gz',
 u'0h-0',
 u'0h-p',
 u'0h0',
 u'0h2',
 u'0h23tc',
 u'0h4ou',
 u'0h6o481w8h1t2',
 u'0h6xl',
 u'0h8',
 u'0h9',
 u'0h9_',
 u'0ha',
 u'0ha7b0',
 u'0hb',
 u'0hd',
 u'0hdf',
 u'0hg',
 u'0hg8erx',
 u'0hgw',
 u'0hh9',
 u'0hjt',
 u'0hm',
 u'0hpg5x-t',
 u'0hq',
 u'0hq4',
 u'0ht',
 u'0hyx',
 u'0i-5u',
 u'0i.3',
 u'0i.bn',
 u'0i0_',
 u'0i281',
 u'0i3rq',
 u'0i7cx',
 u'0i91n',
 u'0ic',
 u'0ieo2el',
 u'0ih',
 u'0ij',
 u'0is0',
 u'0iv',
 u'0ivbg6',
 u'0ivbtm9',
 u'0ivbud',
 u'0ivbud9',
 u'0ivbudk',
 u'0ivbvl',
 u'0ivc',
 u'0ive',
 u'0ive8',
 u'0ivf1dk',
 u'0ivf2l',
 u'0ivmhm',
 u'0ivmhm9',
 u'0ivmiu',
 u'0ivmk',
 u'0iwx.c',
 u'0iwx.c0rvl',
 u'0j4',
 u'0j5',
 u'0j5-57',
 u'0j_0-3',
 u'0ja3d',
 u'0jb6pwzasj',
 u'0je',
 u'0jeiq',
 u'0jf',
 u'0jh',
 u'0jkh',
 u'0jr',
 u'0jt1',
 u'0jx',
 u'0jx5gvp',
 u'0jy',
 u'0jz',
 u'0k-2u',
 u'0k5',
 u'0k82',
 u'0k83a',
 u'0k9jsu',
 u'0kbfkp',
 u'0kd',
 u'0kh',
 u'0khp',
 u'0kj',
 u'0km',
 u'0km2',
 u'0kqi',
 u'0kr',
 u'0ks',
 u'0ksx',
 u'0kt',
 u'0kwbw',
 u'0l0',
 u'0l5h06j',
 u'0l5h06l',
 u'0l7',
 u'0la1z',
 u'0lc',
 u'0lhf',
 u'0li',
 u'0ll',
 u'0lme3vkdw6wo',
 u'0lnbq',
 u'0lnnm',
 u'0lo',
 u'0lowt',
 u'0lq',
 u'0ls8',
 u'0lsbd',
 u'0lu',
 u'0lv',
 u'0lv1a4e3',
 u'0lzi3-z-5pzk8',
 u'0m0x',
 u'0m1qz',
 u'0m2',
 u'0m5',
 u'0m6bq',
 u'0m75',
 u'0m75de06b4q',
 u'0m75u',
 u'0m75u9',
 u'0m8',
 u'0m8b',
 u'0m8bnh',
 u'0m8w',
 u'0ma',
 u'0max',
 u'0mbz',
 u'0mc',
 u'0megyt',
 u'0mez-k9k',
 u'0mf',
 u'0mi',
 u'0mis',
 u'0mjx9',
 u'0mk',
 u'0mk.sl',
 u'0mk80',
 u'0mkg',
 u'0ml',
 u'0mm2',
 u'0moa',
 u'0mph',
 u'0mq69',
 u'0ms0',
 u'0msd',
 u'0mvbdi',
 u'0mvbf',
 u'0mvbgt',
 u'0mvbtmvo2',
 u'0mvh',
 u'0mvmk',
 u'0mxb',
 u'0n1',
 u'0n3',
 u'0n5',
 u'0nb',
 u'0ne1',
 u'0neat',
 u'0nf',
 u'0nfdh',
 u'0nh',
 u'0ni4',
 u'0niy',
 u'0nq',
 u'0nt',
 u'0ntmrn',
 u'0ntv.273g',
 u'0o-y',
 u'0o2',
 u'0o2d',
 u'0o3',
 u'0o6kgm',
 u'0o_2',
 u'0oa',
 u'0ods2b8',
 u'0of',
 u'0oft',
 u'0ogl',
 u'0oi',
 u'0ol',
 u'0ol63',
 u'0olmi2',
 u'0omdcua8a4',
 u'0opco-dw',
 u'0oqis',
 u'0os8is7u',
 u'0otv',
 u'0otz',
 u'0p0',
 u'0p1i5',
 u'0p38',
 u'0p4u-34u',
 u'0p5f',
 u'0p6',
 u'0p6a3b1w165w',
 u'0p7',
 u'0p8',
 u'0p8jiac',
 u'0p9c',
 u'0p_7924x',
 u'0pd',
 u'0pe0',
 u'0pfbd',
 u'0pj',
 u'0pl4',
 u'0pn',
 u'0pp',
 u'0prum0q',
 u'0ptm',
 u'0pto',
 u'0pvhr4',
 u'0pvx',
 u'0px8r',
 u'0pxf1l',
 u'0pxve0b',
 u'0pzwx',
 u'0q-_0',
 u'0q.-xny5gx',
 u'0q.x1',
 u'0q1t',
 u'0q76t',
 u'0qas',
 u'0qax',
 u'0qb',
 u'0qhh',
 u'0qljfw',
 u'0qm0n0',
 u'0qq',
 u'0qqiyay6s',
 u'0qu',
 u'0quh',
 u'0qur',
 u'0qv',
 u'0qvn',
 u'0qvq',
 u'0qvql6s3b',
 u'0qvqma',
 u'0qvqn',
 u'0qvqn1',
 u'0qw',
 u'0qwa',
 u'0qwol6s3',
 u'0qwomk4',
 u'0qxp',
 u'0r.da',
 u'0r1l40',
 u'0r2',
 u'0r445',
 u'0r66',
 u'0r_',
 u'0razbbh107h',
 u'0rchzv',
 u'0rdf',
 u'0rfumrd',
 u'0rgt9',
 u'0rhj',
 u'0rht',
 u'0rk',
 u'0rn',
 u'0rr',
 u'0rtr-58',
 u'0ru',
 u'0rv',
 u'0ry48x',
 u'0s03',
 u'0s1',
 u'0s2',
 u'0s4',
 u'0s792j8jdn',
 u'0s9',
 u'0sc3',
 u'0scr',
 u'0sk',
 u'0sl',
 u'0sla0t',
 u'0slrmc',
 u'0smpax',
 u'0std',
 u'0sx',
 u'0sz',
 u'0sz5u',
 u'0sz6_f',
 u'0t-l',
 u'0t-w',
 u'0t-wb',
 u'0t-wi_6ukx',
 u'0t-wj',
 u'0t-wm',
 u'0t-wmxg',
 u'0t-wmz',
 u'0t.u',
 u'0t2o',
 u'0t7-u',
 u'0t7s',
 u'0tb',
 u'0tbxn',
 u'0tbxom',
 u'0tbxom4',
 u'0tbxom4u-3l',
 u'0tfj',
 u'0tg',
 u'0tgx',
 u'0tgx8',
 u'0th',
 u'0thbq',
 u'0tig1',
 u'0tk',
 u'0tmobi',
 u'0tn',
 u'0tp',
 u'0tq',
 u'0tq33',
 u'0tq6',
 u'0trb',
 u'0tsi',
 u'0tsq3',
 u'0tu525vk',
 u'0tv_g',
 u'0tzv',
 u'0u-1y',
 u'0u1',
 u'0u14',
 u'0u140w',
 u'0u2',
 u'0u48c',
 u'0u59',
 u'0ud',
 u'0ue',
 u'0ulx',
 u'0un',
 u'0up9',
 u'0urx',
 u'0uuv',
 u'0uv',
 u'0uxgblk',
 u'0uy',
 u'0v.zlp',
 u'0v2',
 u'0v34b',
 u'0v6p',
 u'0va',
 u'0vah',
 u'0vah8',
 u'0vb.y',
 u'0vbs',
 u'0vcol6s3',
 u'0vcol6s3m',
 u'0vd',
 u'0vet',
 u'0vg',
 u'0vg09_',
 u'0vkjsj1',
 u'0vkjzl',
 u'0vkzyrah8',
 u'0vow8',
 u'0vpy8cl',
 u'0vuimj',
 u'0vv6',
 u'0vx48',
 u'0vy4b',
 u'0w-p',
 u'0w0',
 u'0w013',
 u'0w25z0r8p',
 u'0w2z2b1w164w',
 u'0w3',
 u'0w4',
 u'0w5',
 u'0w5r',
 u'0w8',
 u'0w9_',
 u'0wa',
 u'0wa8nfu',
 u'0wax0t',
 u'0wc',
 u'0we',
 u'0wiz',
 u'0wj',
 u'0wk',
 u'0wk.lhz.d',
 u'0wkg',
 u'0wkz',
 u'0wmfhhm',
 u'0ws',
 u'0wu',
 u'0wv',
 u'0ww',
 u'0wxb',
 u'0wzu',
 u'0x.xn',
 u'0x0',
 u'0x00',
 u'0x00-0x1f',
 u'0x000c',
 u'0x000f',
 u'0x0069',
 u'0x01',
 u'0x02',
 u'0x03',
 u'0x03f8',
 u'0x04',
 u'0x08',
 u'0x0f',
 u'0x10',
 u'0x100',
 u'0x14',
 u'0x170',
 u'0x2',
 u'0x20',
 u'0x200',
 u'0x201',
 u'0x21',
 u'0x22',
 u'0x23',
 u'0x24',
 u'0x25',
 u'0x27',
 u'0x280',
 u'0x29',
 u'0x2e0',
 u'0x2e8',
 u'0x30',
 u'0x360-0x37f',
 u'0x37a',
 u'0x37f',
 u'0x38',
 u'0x3c',
 u'0x3d4',
 u'0x3f',
 u'0x40',
 u'0x4000',
 u'0x400000',
 u'0x4d42',
 u'0x500043',
 u'0x5i',
 u'0x6',
 u'0x60',
 u'0x62',
 u'0x6b',
 u'0x6kj4m',
 u'0x7',
 u'0x7f',
 u'0x80',
 u'0x800000',
 u'0x80079',
 u'0x8007a',
 u'0x8007b',
 u'0x80080',
 u'0x80083',
 u'0x9ma1f6',
 u'0x9o01f6',
 u'0xa',
 u'0xa00000',
 u'0xa3',
 u'0xa7',
 u'0xa8',
 u'0xb',
 u'0xb00003',
 u'0xbum1',
 u'0xc0',
 u'0xc010',
 u'0xc018',
 u'0xd',
 u'0xd000',
 u'0xd0000d',
 u'0xd0001d',
 u'0xe0000d',
 u'0xff',
 u'0xff00',
 u'0xff0000',
 u'0xffffffff',
 u'0xh',
 u'0xhb',
 u'0xhi',
 u'0xi',
 u'0xw',
 u'0xx',
 u'0y.3dy',
 u'0y3',
 u'0y4',
 u'0yf07nq94',
 u'0yg',
 u'0yh_0lxiad',
 u'0yhk',
 u'0yi0v',
 u'0yj',
 u'0yldm',
 u'0ytj',
 u'0yu85',
 u'0yxey',
 u'0yxt',
 u'0z-1l',
 u'0z.xf',
 u'0z00.iim0',
 u'0z4',
 u'0z4dw_lqxk',
 u'0z7w_',
 u'0z8',
 u'0zd',
 u'0zphu',
 u'0zsq9',
 u'0zsqk',
 u'0zv',
 u'0zx',
 u'0zy',
 u'0zz',
 u'0zz8b',
 u'0zznm',
 u'0zzum',
 u'0zzum1',
 u'1--q9ux',
 u'1-.jhhgczw',
 u'1-0-att-0-700-wmurray',
 u'1-15amp',
 u'1-1p0b4p2',
 u'1-2mb',
 u'1-35io',
 u'1-38.ay.nk',
 u'1-3ghz',
 u'1-3kbyte',
 u'1-408-730-5750....sam',
 u'1-408-736-2000...fax',
 u'1-5w22ya9',
 u'1-6eiqy',
 u'1-7sl',
 u'1-800-245-unix',
 u'1-800-3-ibm-os2',
 u'1-800-388-plus',
 u'1-800-3cl-aris',
 u'1-800-4-cancer',
 u'1-800-441-math',
 u'1-800-828-unix',
 u'1-800-832-4778.....sam',
 u'1-800-886-lyme',
 u'1-800-8applix',
 u'1-800-ama-join',
 u'1-800-clu-bmac',
 u'1-800-dataset',
 u'1-800-digital',
 u'1-800-efa-1000',
 u'1-800-hpclass',
 u'1-800-mac-stuf',
 u'1-800-mac-usa1',
 u'1-800-sos-apple',
 u'1-800-trainer',
 u'1-800-uwh-iner',
 u'1-900-got-srcs',
 u'1-900-quoteme',
 u'1-apr-92',
 u'1-b-1',
 u'1-bit',
 u'1-bit-per-pixel',
 u'1-canucks',
 u'1-day',
 u'1-etha0',
 u'1-foot',
 u'1-for-1',
 u'1-in-3',
 u'1-inch',
 u'1-kh',
 u'1-line',
 u'1-mar-93',
 u'1-meg',
 u'1-mile',
 u'1-millisecond',
 u'1-pc',
 u'1-penguins',
 u'1-qn2j2l8',
 u'1-run',
 u'1-screen',
 u'1-slight',
 u'1-to-1',
 u'1-u9',
 u'1-xpi',
 u'1-y2',
 u'1-year',
 u'1.054589e-34',
 u'1.0b14',
 u'1.0b15',
 u'1.0b16',
 u'1.16mg',
 u'1.1scd1',
 u'1.20in-reply-to',
 u'1.24in-reply-to',
 u'1.25mb',
 u'1.2f33enh',
 u'1.2gb',
 u'1.2mb',
 u'1.2meg',
 u'1.327e20',
 u'1.33mb',
 u'1.3807e-23',
 u'1.3ci',
 u'1.3mb',
 u'1.4-b1',
 u'1.41.uac',
 u'1.42e-4',
 u'1.44mb',
 u'1.44meg',
 u'1.44mg',
 u'1.496e11',
 u'1.4e-23',
 u'1.4mb',
 ...]
In [21]:
print len(clf_7.named_steps['vect'].get_feature_names())
145771
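To connect these tokens back to the classifier, a final sketch (not in the original notebook) lists the ten tokens with the highest log-probability for one of the classes, rec.sport.hockey (index 10):

# sketch: tokens with the highest log-probability for class 10 (rec.sport.hockey)
feature_names = np.asarray(clf_7.named_steps['vect'].get_feature_names())
log_probs = clf_7.named_steps['clf'].feature_log_prob_[10]
top10 = np.argsort(log_probs)[-10:]   # argsort is ascending, so take the last ten
print news.target_names[10], ':', ' '.join(feature_names[top10])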