How to prevent splitting specific words, phrases, and numbers in NLTK?
I have a problem with text matching when I tokenize text that splits specific words, dates, and numbers. How can I prevent phrases like "runs in my family", "30 minute walk", or "4x a day" from being split apart when tokenizing with NLTK?
They should not result in:
['runs','in','my','family','4x','a','day']
For example:
Yes 20-30 minutes a day on my bike, it works great!!
is tokenized as:
['yes','20-30','minutes','a','day','on','my','bike',',','it','works','great']
I want '20-30 minutes' to be treated as a single word. How can I get this behavior?
Solution:
As far as I know, you will have a hard time keeping n-grams of varying lengths as tokens, but you can find those n-grams, as shown here. Then you can replace the items you want kept together with versions joined by some character, such as a dash.
Here is an example solution, though there are probably many ways to do it. An important note: I provide a way to find the ngrams that are common in the text. You will probably want more than one, so there is a variable that lets you decide how many of the most common ngrams to collect (you might want a different number for each length, but for now there is just the one variable). This can still miss ngrams you consider important; to cover those, add them to user_grams and they will be included in the search.
import nltk
#an example corpus
corpus='''A big tantrum runs in my family 4x a day, every week.
A big tantrum is lame. A big tantrum causes strife. It runs in my family
because of our complicated history. Every week is a lot though. Every week
I dread the tantrum. Every week...Here is another ngram I like a lot'''.lower()
#tokenize the corpus
corpus_tokens = nltk.word_tokenize(corpus)
#create ngrams from n=2 to 5
bigrams = list(nltk.ngrams(corpus_tokens,2))
trigrams = list(nltk.ngrams(corpus_tokens,3))
fourgrams = list(nltk.ngrams(corpus_tokens,4))
fivegrams = list(nltk.ngrams(corpus_tokens,5))
This section finds the common ngrams, up to five-grams.
#if you change this to zero you will only get the user chosen ngrams
n_most_common=1 #how many of the most common n-grams do you want.
fdist_bigrams = nltk.FreqDist(bigrams).most_common(n_most_common) #n most common bigrams
fdist_trigrams = nltk.FreqDist(trigrams).most_common(n_most_common) #n most common trigrams
fdist_fourgrams = nltk.FreqDist(fourgrams).most_common(n_most_common) #n most common four grams
fdist_fivegrams = nltk.FreqDist(fivegrams).most_common(n_most_common) #n most common five grams
#concat the ngrams together
fdist_bigrams = [' '.join(x[0]) for x in fdist_bigrams]
fdist_trigrams = [' '.join(x[0]) for x in fdist_trigrams]
fdist_fourgrams = [' '.join(x[0]) for x in fdist_fourgrams]
fdist_fivegrams = [' '.join(x[0]) for x in fdist_fivegrams]
#next 4 lines create a single list with important ngrams
n_grams=fdist_bigrams
n_grams.extend(fdist_trigrams)
n_grams.extend(fdist_fourgrams)
n_grams.extend(fdist_fivegrams)
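As an aside (not part of the original answer), the four per-length lists above can be built in one pass with nltk.everygrams. This is only a compact equivalent of the same steps, assuming the same corpus_tokens and n_most_common defined earlier:
from nltk import everygrams, FreqDist
#sketch: build all ngrams of length 2 to 5 in one call
all_grams = list(everygrams(corpus_tokens, min_len=2, max_len=5))
#for each length, keep the n most common and join the words with spaces
n_grams_alt = [' '.join(g)
               for size in range(2, 6)
               for g, _ in FreqDist(t for t in all_grams
                                    if len(t) == size).most_common(n_most_common)]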
This section lets you add your own ngrams to the list.
#Another option here would be to make your own list of the ones you want
#in this example I add some user ngrams to the ones found above
user_grams=['ngram1 I like', 'ngram 2', 'another ngram I like a lot']
user_grams=[x.lower() for x in user_grams]
n_grams.extend(user_grams)
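Since the question also covers numbers and dates, one optional extension (my own sketch, not part of the original approach; the regex patterns are only illustrative) is to harvest numeric phrases directly and treat them like user-chosen ngrams. Note that if you run this extra step, the output below would also contain '4x-a-day' instead of '4x', 'a', 'day'.
import re
#sketch: pull numeric spans such as '20-30 minutes' or '4x a day' out of the
#corpus with illustrative regex patterns and add them to the ngram list
numeric_grams = re.findall(r'\b\d+(?:-\d+)?\s+minutes?\b', corpus)
numeric_grams += re.findall(r'\b\d+x\s+a\s+day\b', corpus)
n_grams.extend(numeric_grams)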
The last section does the processing so that you can tokenize again and get the ngrams as tokens.
#initialize the corpus that will have combined ngrams
corpus_ngrams=corpus
#here we go through the ngrams we found and replace them in the corpus with
#version connected with dashes. That way we can find them when we tokenize.
for gram in n_grams:
    #join the ngram's words with dashes so word_tokenize keeps it as one token
    gram_r = gram.replace(' ','-')
    corpus_ngrams = corpus_ngrams.replace(gram, gram_r)
#retokenize the new corpus so we can find the ngrams
corpus_ngrams_tokens= nltk.word_tokenize(corpus_ngrams)
print(corpus_ngrams_tokens)
Out: ['a-big-tantrum', 'runs-in-my-family', '4x', 'a', 'day', ',', 'every-week', '.', 'a-big-tantrum', 'is', 'lame', '.', 'a-big-tantrum', 'causes', 'strife', '.', 'it', 'runs-in-my-family', 'because', 'of', 'our', 'complicated', 'history', '.', 'every-week', 'is', 'a', 'lot', 'though', '.', 'every-week', 'i', 'dread', 'the', 'tantrum', '.', 'every-week', '...']
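One caveat with the plain string replace is that a shorter ngram can get joined inside a longer span before the longer one is matched, so the order of n_grams matters. For phrases you already know in advance, NLTK's MWETokenizer avoids rewriting the corpus entirely: it merges multi-word expressions back together after ordinary tokenization. A minimal sketch (the phrase list here is just an example):
from nltk import word_tokenize
from nltk.tokenize import MWETokenizer
#merge known multi-word expressions back into single tokens
mwe = MWETokenizer([('runs', 'in', 'my', 'family'), ('4x', 'a', 'day')],
                   separator='-')
print(mwe.tokenize(word_tokenize('a big tantrum runs in my family 4x a day')))
#['a', 'big', 'tantrum', 'runs-in-my-family', '4x-a-day']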
I think this is actually a very good question.