How to plot text classification with tf-idf, SVM and sklearn in Python
I have implemented text classification using tf-idf and SVM by following this tutorial.
The classification works fine.
Now I would like to plot the tf-idf values (i.e. the features) and see how the final hyperplane that separates the data into two classes is generated.
The code I implemented is as follows:
import os
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import StratifiedKFold
def make_Corpus(root_dir):
    polarity_dirs = [os.path.join(root_dir, f) for f in os.listdir(root_dir)]
    corpus = []
    for polarity_dir in polarity_dirs:
        reviews = [os.path.join(polarity_dir, f) for f in os.listdir(polarity_dir)]
        for review in reviews:
            doc_string = ""
            with open(review) as rev:
                for line in rev:
                    doc_string = doc_string + line
            if not corpus:
                corpus = [doc_string]
            else:
                corpus.append(doc_string)
    return corpus
#Create a corpus with each document having one string
root_dir = 'txt_sentoken'
corpus = make_Corpus(root_dir)
#Stratified 10-fold cross validation with SVM and Multinomial NB
labels = np.zeros(2000)
labels[0:1000] = 0
labels[1000:2000] = 1
kf = StratifiedKFold(n_splits=10)
totalsvm = 0  # Accuracy measure on 2000 files
totalNB = 0
totalMatSvm = np.zeros((2, 2))  # Confusion matrix on 2000 files
totalMatNB = np.zeros((2, 2))
for train_index, test_index in kf.split(corpus, labels):
    X_train = [corpus[i] for i in train_index]
    X_test = [corpus[i] for i in test_index]
    y_train, y_test = labels[train_index], labels[test_index]
    vectorizer = TfidfVectorizer(min_df=5, max_df=0.8, sublinear_tf=True, use_idf=True, stop_words='english')
    train_corpus_tf_idf = vectorizer.fit_transform(X_train)
    test_corpus_tf_idf = vectorizer.transform(X_test)
    model1 = LinearSVC()
    model2 = MultinomialNB()
    model1.fit(train_corpus_tf_idf, y_train)
    model2.fit(train_corpus_tf_idf, y_train)
    result1 = model1.predict(test_corpus_tf_idf)
    result2 = model2.predict(test_corpus_tf_idf)
    totalMatSvm = totalMatSvm + confusion_matrix(y_test, result1)
    totalMatNB = totalMatNB + confusion_matrix(y_test, result2)
    totalsvm = totalsvm + sum(y_test == result1)
    totalNB = totalNB + sum(y_test == result2)
print(totalMatSvm, totalsvm/2000.0, totalMatNB, totalNB/2000.0)
I have read about how to plot graphs, but I could not find any tutorial on plotting the tf-idf features together with the hyperplane generated by the SVM.
Solution:
First of all, you need to select only 2 features in order to create a 2-dimensional decision surface plot.
An example using the 20 newsgroups data:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
newsgroups_train = fetch_20newsgroups(subset='train',
                                      categories=['alt.atheism', 'sci.space'])
pipeline = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer())])
X = pipeline.fit_transform(newsgroups_train.data).toarray()
# Select ONLY 2 features
X = X[:, [0, 1]]
y = newsgroups_train.target
def make_meshgrid(x, y, h=.02):
    x_min, x_max = x.min() - 1, x.max() + 1
    y_min, y_max = y.min() - 1, y.max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    return xx, yy

def plot_contours(ax, clf, xx, yy, **params):
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    out = ax.contourf(xx, yy, Z, **params)
    return out
model = svm.SVC(kernel='linear')
clf = model.fit(X, y)
fig, ax = plt.subplots()
# title for the plots
title = ('Decision surface of linear SVC ')
# Set-up grid for plotting.
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
ax.set_ylabel('y label here')
ax.set_xlabel('x label here')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title)
ax.legend()
plt.show()
Result (decision surface plot):
The plot is not nice, because we selected 2 arbitrary features to create it. One way to make it better: you could use a univariate ranking method (e.g. the ANOVA F-value test) to find the 2 best features, and then use these top-2 features to create a nice separating surface plot.
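A minimal sketch of that suggestion (not part of the original answer): it uses sklearn's SelectKBest with f_classif to rank the tf-idf features by their ANOVA F-value on the same 20 newsgroups data, keeps the top 2, and reuses the make_meshgrid and plot_contours helpers defined above; the names selector and X_top2 are illustrative.
from sklearn.feature_selection import SelectKBest, f_classif
# Same 20 newsgroups data and tf-idf pipeline as above, but instead of taking
# the first 2 columns, rank all features by their ANOVA F-value and keep the top 2
newsgroups_train = fetch_20newsgroups(subset='train',
                                      categories=['alt.atheism', 'sci.space'])
pipeline = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer())])
X_tfidf = pipeline.fit_transform(newsgroups_train.data)
y = newsgroups_train.target
selector = SelectKBest(f_classif, k=2)
X_top2 = selector.fit_transform(X_tfidf, y).toarray()
# Fit the linear SVM on the 2 selected features and plot its decision surface
clf = svm.SVC(kernel='linear').fit(X_top2, y)
xx, yy = make_meshgrid(X_top2[:, 0], X_top2[:, 1])
fig, ax = plt.subplots()
plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X_top2[:, 0], X_top2[:, 1], c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
ax.set_title('Decision surface on the top-2 ANOVA-ranked tf-idf features')
plt.show()
Because the two selected features carry the strongest class signal, the two groups should usually look better separated in this plot than when using two arbitrary columns.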
Tags: python, scikit-learn, graph, tf-idf, svm
Source: https://codeday.me/bug/20190928/1826134.html