
jieba word segmentation: the 20 most frequent words in Journey to the West (西游记)


import jieba

# Read the novel; the path and gb18030 encoding match the source text file
with open("D:\\西游记.txt", "r", encoding="gb18030") as f:
    txt = f.read()

words = jieba.lcut(txt)     # segment the text with jieba's precise mode
counts = {}                 # map each word to the number of times it appears

for word in words:
    if len(word) == 1:      # skip single-character tokens (mostly function words)
        continue
    # merge different names for the same character into one canonical name
    elif word == "大圣" or word == "老孙" or word == "行者" or word == "孙大圣" or word == "孙行者" or word == "猴王" or word == "悟空" or word == "齐天大圣" or word == "猴子":
        rword = "孙悟空"
    elif word == "师父" or word == "三藏" or word == "圣僧":
        rword = "唐僧"
    elif word == "呆子" or word == "八戒" or word == "老猪":
        rword = "猪八戒"
    elif word == "沙和尚":
        rword = "沙僧"
    elif word == "妖精" or word == "妖魔" or word == "妖道":
        rword = "妖怪"
    elif word == "佛祖":
        rword = "如来"
    elif word == "三太子":
        rword = "白马"
    else:
        rword = word
    counts[rword] = counts.get(rword, 0) + 1

items = list(counts.items())                     # convert the dict into a list of (word, count) pairs
items.sort(key=lambda x: x[1], reverse=True)     # sort by frequency, descending

for i in range(20):                              # print the 20 most frequent words
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
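As a minimal alternative sketch (not part of the original post), the counting and top-20 steps can also be expressed with collections.Counter. The ALIASES table below is a hypothetical restructuring of the elif chain above with the same mappings; the file path and encoding are assumed to match the script above.

from collections import Counter
import jieba

# Hypothetical alias table: maps alternate names to one canonical character name
ALIASES = {
    "大圣": "孙悟空", "老孙": "孙悟空", "行者": "孙悟空", "孙大圣": "孙悟空",
    "孙行者": "孙悟空", "猴王": "孙悟空", "悟空": "孙悟空", "齐天大圣": "孙悟空", "猴子": "孙悟空",
    "师父": "唐僧", "三藏": "唐僧", "圣僧": "唐僧",
    "呆子": "猪八戒", "八戒": "猪八戒", "老猪": "猪八戒",
    "沙和尚": "沙僧",
    "妖精": "妖怪", "妖魔": "妖怪", "妖道": "妖怪",
    "佛祖": "如来",
    "三太子": "白马",
}

with open("D:\\西游记.txt", "r", encoding="gb18030") as f:
    words = jieba.lcut(f.read())

# Normalize aliases, drop single-character tokens, and count in one pass
counter = Counter(ALIASES.get(w, w) for w in words if len(w) > 1)

for word, count in counter.most_common(20):      # the 20 most frequent entries
    print("{0:<10}{1:>5}".format(word, count))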

Source: https://www.cnblogs.com/wzl0727/p/13975204.html