
[Notes on Zhiyuan Liu's NLP Course] Phrase & Sentence & Document Representation


Natural languages contain semantic units at multiple granularities, such as words, phrases, sentences, and documents. We have already seen how to learn word representations (link). In this post, we focus on learning representations for phrases, sentences, and documents.


Phrase Representation

Suppose a phrase \(p\) is composed of two words \(u\) and \(v\). The phrase embedding \(\boldsymbol{p}\) is derived from the embeddings of its words, \(\boldsymbol{u}\) and \(\boldsymbol{v}\). Phrase representation methods fall into three categories: additive models, multiplicative models, and others.

Additive Models

  1. Vector addition: \(\boldsymbol{p}=\boldsymbol{u}+\boldsymbol{v}\)

  2. Weight the constituents differentially in the addition: \(\boldsymbol{p}=\alpha \boldsymbol{u}+\beta \boldsymbol{v}\) (where \(\alpha\) and \(\beta\) are scalar weights); a code sketch of both additive variants follows this list.
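Both additive variants are easy to try out. Below is a minimal sketch (not part of the original course material) using NumPy with made-up 3-dimensional word vectors; in practice \(\boldsymbol{u}\) and \(\boldsymbol{v}\) would come from a pre-trained model such as word2vec, and \(\alpha\), \(\beta\) would be tuned or learned on a downstream task.

```python
import numpy as np

# Hypothetical word embeddings; the values here are made up for illustration.
u = np.array([0.2, -0.1, 0.4])   # embedding of the first word, e.g. "machine"
v = np.array([0.5, 0.3, -0.2])   # embedding of the second word, e.g. "learning"

# 1. Plain vector addition: p = u + v
p_add = u + v

# 2. Weighted addition: p = alpha * u + beta * v
#    alpha and beta are scalar weights, chosen arbitrarily here;
#    in practice they can be tuned or learned.
alpha, beta = 0.4, 0.6
p_weighted = alpha * u + beta * v

print(p_add)       # approximately [0.7  0.2  0.2]
print(p_weighted)  # approximately [0.38 0.14 0.04]
```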
