A faster lemmatization technique in Python
Author: Internet
I am trying to find a faster way to lemmatize the words in a list (named text) using the NLTK WordNet Lemmatizer. This is apparently the most time-consuming step in my whole program (I found this using cProfile).
Here is the piece of code whose speed I am trying to optimize:
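As an aside, a profiling run like the one mentioned above can be reproduced along these lines with the standard library's cProfile and pstats modules. This is only a sketch: slow_step is a hypothetical stand-in for the expensive lemmatization call, since the real step requires NLTK's WordNet data.

```python
import cProfile
import io
import pstats

def slow_step(words):
    # hypothetical stand-in for the expensive per-word lemmatization
    return [w.lower() for w in words]

# profile one run of the program and capture the report as text
profiler = cProfile.Profile()
profiler.enable()
slow_step(['Tests', 'Friends', 'Hello'] * 1000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats()
report = stream.getvalue()

# the per-function table names the hot spot, here slow_step
print('slow_step' in report)
# => True
```

Sorting by cumulative time is what surfaces a single dominant step like the lemmatizer loop in the question.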
def lemmed(text):
    l = len(text)
    i = 0
    wnl = WordNetLemmatizer()
    while (i < l):
        text[i] = wnl.lemmatize(text[i])
        i = i + 1
    return text
Using the lemmatizer slows my program down by a factor of 20. Any help would be appreciated.
Solution:
If you have several cores available, try the multiprocessing library:
from nltk import WordNetLemmatizer
from multiprocessing import Pool

def lemmed(text, cores=6):  # tweak cores as needed
    with Pool(processes=cores) as pool:
        wnl = WordNetLemmatizer()
        result = pool.map(wnl.lemmatize, text)
        return result
sample_text = ['tests', 'friends', 'hello'] * (10 ** 6)
lemmed_text = lemmed(sample_text)
assert len(sample_text) == len(lemmed_text) == (10 ** 6) * 3
print(lemmed_text[:3])
# => ['test', 'friend', 'hello']
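A further optimization worth trying, not part of the original answer: natural-language text repeats tokens heavily (the sample above has only 3 distinct words in 3 million), so memoizing the lemmatizer with functools.lru_cache means each distinct word is lemmatized only once. A minimal sketch of the pattern; expensive_lemmatize is a hypothetical stand-in for wnl.lemmatize so the example runs without NLTK's WordNet data.

```python
from functools import lru_cache

calls = 0

def expensive_lemmatize(word):
    # hypothetical stand-in for wnl.lemmatize; counts real invocations
    global calls
    calls += 1
    return word[:-1] if word.endswith('s') else word

@lru_cache(maxsize=None)
def cached_lemmatize(word):
    # repeated words hit the cache instead of the expensive function
    return expensive_lemmatize(word)

sample = ['tests', 'friends', 'hello'] * 1000
result = [cached_lemmatize(w) for w in sample]

print(result[:3])   # => ['test', 'friend', 'hello']
print(calls)        # => 3  (only the 3 distinct words were computed)
```

In real code you would wrap wnl.lemmatize directly, e.g. cached = lru_cache(maxsize=None)(wnl.lemmatize); this can also be combined with the multiprocessing approach, with each worker keeping its own cache.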
Tags: performance, python-3-x, nltk, lemmatization, python. Source: https://codeday.me/bug/20191118/2028541.html