Naive Bayes (3)

Loosely speaking, a Bayes classifier works by computing probabilities, and naive Bayes adds the "naive" assumption that the features are conditionally independent of one another given the class.

Prior (training) data -- a set of known samples together with their known class labels.

Test data -- for each test sample, compute the conditional probability of it belonging to each class in the training data; the class with the highest probability becomes the prediction, as sketched below.
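
Concretely, for a document made of words w1…wn, naive Bayes picks the class c that maximizes P(c) · P(w1|c) · … · P(wn|c), and in practice this product is evaluated as a sum of logarithms. A minimal sketch of that scoring rule, with hand-picked probabilities used purely for illustration:

from math import log

# hypothetical probabilities, chosen only to illustrate the scoring rule
prior_spam = 0.4                                    # P(class = spam)
p_word_given_spam = {'hello': 0.1, 'stupid': 0.6}   # P(word | spam)
p_word_given_ham = {'hello': 0.3, 'stupid': 0.05}   # P(word | ham)

doc = ['hello', 'stupid']
score_spam = log(prior_spam) + sum(log(p_word_given_spam[w]) for w in doc)
score_ham = log(1 - prior_spam) + sum(log(p_word_given_ham[w]) for w in doc)
print('spam' if score_spam > score_ham else 'ham')    # prints 'spam'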

Training code:

from numpy import array, ones, log    # needed by the code in this post

def trainNB0(trainMatrix, trainCategory):
    # trainMatrix: document vectors, one row per training document
    # trainCategory: class label (0 or 1) for each document
    numTrainDocs = len(trainMatrix)    # number of training documents
    numWords = len(trainMatrix[0])     # number of features (vocabulary size)
    pAbusive = sum(trainCategory)/float(numTrainDocs)    # prior probability P(class = 1)
    # initialize counts to 1 and denominators to 2 (Laplace smoothing,
    # so no conditional probability is ever exactly zero)
    p0Num = ones(numWords); p1Num = ones(numWords)
    p0Denom = 2.0; p1Denom = 2.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = log(p1Num/p1Denom)    # log P(word_i | class = 1)
    p0Vect = log(p0Num/p0Denom)    # log P(word_i | class = 0)
    return p0Vect, p1Vect, pAbusive
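
To get a feel for what trainNB0 returns, it can be run on a tiny hand-made matrix; the 3-word vocabulary and labels below are invented just for this sketch:

from numpy import array

toyMat = array([[1, 1, 0],
                [1, 0, 1],
                [0, 1, 1],
                [1, 1, 1]])
toyLabels = array([0, 0, 1, 1])
p0V, p1V, pAb = trainNB0(toyMat, toyLabels)
print(pAb)    # 0.5 -- the prior probability of class 1
print(p1V)    # log P(word_i | class 1), Laplace-smoothed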

Classification code:

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    # vec2Classify: the document vector to classify
    # p0Vec, p1Vec, pClass1: probability estimates returned by trainNB0
    # work in log space: log prior plus the sum of log likelihoods
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)        # element-wise mult
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0
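
The reason classifyNB compares sums of logarithms rather than products of probabilities is numerical: multiplying hundreds of small P(w_i|c) factors underflows to 0.0 in floating point, while the equivalent sum of logs stays in a usable range. A small illustration (the 0.001 figure is arbitrary):

from math import log

p = 0.001
print(p ** 400)        # 0.0 -- the product has underflowed
print(400 * log(p))    # about -2763.1, still fine for comparing classes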

Test code:

def testingNB():
    listOPosts, listClasses = loadDataSet()    # load the sample posts and labels
    myVocabList = createVocabList(listOPosts)  # build the vocabulary
    trainMat = []
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
    p0V, p1V, pAb = trainNB0(array(trainMat), array(listClasses))    # train
    testEntry = ['love', 'my', 'dalmation']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb))
    testEntry = ['stupid', 'garbage']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb))
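
testingNB relies on three helpers defined in the earlier parts of this series (loadDataSet, createVocabList, setOfWords2Vec). If they are not at hand, sketches along the following lines, following the usual Machine Learning in Action conventions, are enough to make the test run (the sample posts and labels are that book's toy data, not anything specific to this article):

def loadDataSet():
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]    # 1 = abusive, 0 = not
    return postingList, classVec

def createVocabList(dataSet):
    vocabSet = set()
    for document in dataSet:
        vocabSet = vocabSet | set(document)    # union of all words seen
    return list(vocabSet)

def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1    # presence only (set-of-words)
    return returnVec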

Example application:

Problem -- decide whether an email is spam, based on a set of keywords: an email containing those words is treated as spam.

Sample data -- a collection of emails, with each word labeled as a keyword or a non-keyword.

e.g. [email] hello [non-keyword, 0]! stupid [keyword, 1] dog [keyword, 1]!

Given a new email, predict whether it is spam.
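
Put together, the spam scenario uses exactly the same pipeline as testingNB. A minimal sketch, where emailDocs and emailLabels are hypothetical placeholders standing in for a real labeled mail corpus:

from numpy import array

emailDocs = [['hello', 'how', 'are', 'you'],
             ['stupid', 'dog', 'buy', 'now'],
             ['meeting', 'at', 'noon', 'hello'],
             ['worthless', 'stupid', 'garbage', 'offer']]
emailLabels = [0, 1, 0, 1]    # 1 = spam, 0 = not spam

vocab = createVocabList(emailDocs)
trainMat = array([setOfWords2Vec(vocab, doc) for doc in emailDocs])
p0V, p1V, pSpam = trainNB0(trainMat, array(emailLabels))

newEmail = ['hello', 'stupid', 'dog']
newVec = array(setOfWords2Vec(vocab, newEmail))
print(classifyNB(newVec, p0V, p1V, pSpam))    # 1 -> predicted spam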

Source: https://blog.csdn.net/harden1013/article/details/122533807