How do I predict the topic of a new query with a trained LDA model in gensim?
I trained a corpus for LDA topic modeling using gensim.
Going through the tutorial on the gensim website (this is not the whole code):
question = 'Changelog generation from Github issues?'
temp = question.lower()
for i in range(len(punctuation_string)):
    temp = temp.replace(punctuation_string[i], '')
words = re.findall(r'\w+', temp, flags = re.UNICODE | re.LOCALE)
important_words = []
important_words = filter(lambda x: x not in stoplist, words)
print important_words
dictionary = corpora.Dictionary.load('questions.dict')
ques_vec = []
ques_vec = dictionary.doc2bow(important_words)
print dictionary
print ques_vec
print lda[ques_vec]
This is the output I get:
['changelog', 'generation', 'github', 'issues']
Dictionary(15791 unique tokens)
[(514, 1), (3625, 1), (3626, 1), (3627, 1)]
[(4, 0.20400000000000032), (11, 0.20400000000000032), (19, 0.20263215848547525), (29, 0.20536784151452539)]
I don't know how this last output is supposed to help me find the probable topic for the question!
Please help!
Solution:
I wrote a function in Python that returns the likely topic for a new query:
def getTopicForQuery(question):
    # Preprocess: lowercase the query and strip punctuation
    temp = question.lower()
    for i in range(len(punctuation_string)):
        temp = temp.replace(punctuation_string[i], '')

    # Tokenize and drop stop words
    words = re.findall(r'\w+', temp, flags = re.UNICODE | re.LOCALE)
    important_words = filter(lambda x: x not in stoplist, words)

    # Load the dictionary built from our own corpus
    dictionary = corpora.Dictionary.load('questions.dict')

    # Convert the query to bag-of-words and get its topic distribution
    ques_vec = dictionary.doc2bow(important_words)
    topic_vec = lda[ques_vec]

    # Put the (topic_id, probability) pairs into an array and sort by probability, descending
    word_count_array = numpy.empty((len(topic_vec), 2), dtype = object)
    for i in range(len(topic_vec)):
        word_count_array[i, 0] = topic_vec[i][0]
        word_count_array[i, 1] = topic_vec[i][1]
    idx = numpy.argsort(word_count_array[:, 1])
    idx = idx[::-1]
    word_count_array = word_count_array[idx]

    # Take the top word of the highest-probability topic
    final = lda.print_topic(word_count_array[0, 0], 1)
    question_topic = final.split('*') ## as format is like "probability * topic"
    return question_topic[1]
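As a quick usage sketch (assuming lda, dictionary, punctuation_string and stoplist from the training script are already in scope; they are not defined in the function itself), the call might look like this:

print(getTopicForQuery('Changelog generation from Github issues?'))
# prints the top word of the most probable topic for the query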
Before going through this, please refer to this link!
In the initial part of the code, the query is preprocessed so that stop words and unnecessary punctuation are removed.
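Note that punctuation_string and stoplist are not shown in the snippet; a minimal sketch of how they might be set up (my assumption, not the original author's exact code) is:

import re
import string
punctuation_string = string.punctuation    # characters stripped from the query
stoplist = set('for a of the and to in on from is are'.split())    # toy stop-word list; use a proper one in practice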
Then, the dictionary created from our own corpus is loaded.
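The file questions.dict is assumed to have been saved when the training corpus was built, roughly along these lines (texts here is a hypothetical list of tokenized training questions):

from gensim import corpora
# texts: list of tokenized documents, e.g. [['changelog', 'generation', 'github', 'issues'], ...]
dictionary = corpora.Dictionary(texts)
dictionary.save('questions.dict')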
Then the tokens of the new query are converted into a bag of words with doc2bow, and the topic probability distribution of the query is computed with topic_vec = lda[ques_vec], where lda is the trained model, as explained in the link mentioned above.
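Here lda is assumed to be a gensim LdaModel trained (or loaded) beforehand; a sketch, with the corpus variable, num_topics value and file name 'questions.lda' being my assumptions:

from gensim import models
# corpus: the bag-of-words corpus built with the same dictionary
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=50)
lda.save('questions.lda')
# ...or load a previously trained model
lda = models.LdaModel.load('questions.lda')
topic_vec = lda[ques_vec]    # list of (topic_id, probability) pairs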
The distribution is then sorted by topic probability, and the word for the highest-probability topic is returned via question_topic[1].
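The numpy sort-and-index step can also be written more compactly in plain Python (an equivalent alternative, not the original author's code):

# topic_vec is the list of (topic_id, probability) pairs returned by lda[ques_vec]
best_topic, best_prob = sorted(topic_vec, key=lambda pair: pair[1], reverse=True)[0]
print(lda.print_topic(best_topic, 1))    # string of the form "probability*word" for the top word of that topic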
Tags: python, nlp, gensim, lda, topic-modeling Source: https://codeday.me/bug/20190529/1178527.html