RETRO

姚伟峰(Matrix Yao)

Info Card

Paper: Improving Language Models by Retrieving from Trillions of Tokens (Borgeaud et al., DeepMind, 2021)

Basic Idea

RETRO is a retrieval-enhanced neural language model.
Compared with existing language models like GPT, it separates memorization from generalization: world knowledge is memorized via retrieval, while language structure is learned by the model.

- General auto-regressive language model
- RETRO's chunked, retrieval-enhanced model

(figure: side-by-side comparison of the two models)
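Roughly, the two likelihoods look like the following, using the RETRO paper's notation: the sequence X of n tokens is split into l chunks C_1, ..., C_l of size m, and RET(C_u) denotes the neighbors retrieved for chunk C_u. This is a sketch; the exact conditioning details are in the paper.

```latex
% General auto-regressive factorization: each token conditions
% only on the preceding tokens.
p(X) = \prod_{i=1}^{n} p\left(x_i \mid x_{<i}\right)

% RETRO: tokens in chunk C_u additionally condition on the
% neighbors retrieved for all preceding chunks.
p(X) = \prod_{u=1}^{l} \prod_{i=1}^{m}
  p\left(x_{(u-1)m+i} \,\middle|\, x_{<(u-1)m+i},\; \mathrm{RET}(C_{u'})_{u'<u}\right)
```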

The diagram below, from [1], is not the whole picture of RETRO; it shows only the retrieval part.
(figure from [1]: RETRO's retrieval flow)

How Does It Work

Step-1: Retrieve Nearest Neighbors and Encode Them

(figure: nearest-neighbor retrieval from the database, followed by neighbor encoding)
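A minimal brute-force sketch of the retrieval step in Python, with toy random embeddings standing in for the real components; RETRO itself embeds chunks with a frozen BERT and searches a trillion-token database with approximate kNN (ScaNN):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_db, k = 64, 10_000, 2  # embed dim, database chunks, neighbors per query chunk

# Toy stand-in for the database of pre-embedded chunks.
db = rng.standard_normal((n_db, d)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)

def retrieve(chunk_emb, k):
    """Brute-force L2 kNN; the real system uses approximate search for scale."""
    q = chunk_emb / np.linalg.norm(chunk_emb)
    dists = np.linalg.norm(db - q, axis=1)
    return np.argsort(dists)[:k]  # indices of the k nearest database chunks

query = rng.standard_normal(d).astype(np.float32)
neighbor_ids = retrieve(query, k)
# Each retrieved chunk, together with its continuation chunk, is then run
# through a bidirectional Transformer encoder; the resulting states are what
# the decoder's chunked cross-attention consumes in Step-2.
```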

Step-2: Decode Causally

(figure: causal decoding with chunked cross-attention over the encoded neighbors)
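A minimal single-head sketch of the chunked cross-attention (CCA) idea, with made-up shapes and without the learned projections or the one-position shift used in the paper; its only point is that chunk u attends to neighbors retrieved for earlier chunks, which keeps decoding causal:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chunked_cross_attention(hidden, neighbor_enc, m):
    """hidden: [n, d] decoder states; neighbor_enc: [l, r, d] encoded
    neighbors per chunk; m: chunk size, with n == l * m.
    Chunk u attends only to neighbors of earlier chunks => still causal."""
    n, d = hidden.shape
    out = hidden.copy()
    for u in range(1, n // m):                 # chunk 0 has no preceding neighbors
        q = hidden[u * m:(u + 1) * m]          # [m, d] queries from chunk u
        kv = neighbor_enc[u - 1]               # [r, d] neighbors of chunk u-1
        attn = softmax(q @ kv.T / np.sqrt(d))  # [m, r] attention weights
        out[u * m:(u + 1) * m] += attn @ kv    # residual add, Transformer-style
    return out

# Toy usage
rng = np.random.default_rng(0)
l, m, r, d = 4, 8, 6, 32
h = rng.standard_normal((l * m, d))
nb = rng.standard_normal((l, r, d))
print(chunked_cross_attention(h, nb, m).shape)  # (32, 32)
```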

Results

Language Model

Bits-per-byte is competitive even at a model size 23x+ smaller than the baselines.

(figure: bits-per-byte comparison across model sizes)
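As context for the metric: bits-per-byte normalizes the model's negative log-likelihood by the byte length of the evaluation data, which makes models with different tokenizers comparable. A common definition (an assumption about the exact convention behind the figure):

```latex
% Total NLL in nats, converted to bits, per byte of raw text.
\text{bpb} = \frac{1}{n_\text{bytes} \ln 2} \sum_{i} -\ln p\left(x_i \mid x_{<i}\right)
```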

Downstream Task: QA

Not that impressive, considering the 7.5B model size. Accuracy is inferior to FiD; the authors attribute this to the current model allocating too small a share of its weights to the encoder.

(figure: QA accuracy comparison against FiD and other baselines)

Application to the ODQA Domain

Pipeline Comparison

We can see that RETRO fits easily into a dense-retriever + neural-ranker ODQA pipeline: it can be viewed as a single-encoder dense retriever plus a neural ranker, where the ranker is compute-heavier than ColBERT, both because of its model size and because the ranker's document encoder cannot be pre-computed. A toy sketch of this cost difference follows.
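A hypothetical Python toy showing where the encoding cost lands (all names and shapes are made up): a ColBERT-style ranker can pre-compute document encodings offline, while a RETRO-style ranker must encode the retrieved neighbors at query time, because its encoder conditions on the current decoder state.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_docs, k = 64, 5_000, 8
W = rng.standard_normal((d, d)).astype(np.float32) / np.sqrt(d)
docs = rng.standard_normal((n_docs, d)).astype(np.float32)

def encode(x, state=None):
    """Stand-in for a Transformer encoder. `state` mimics RETRO's neighbor
    encoder being conditioned on the decoder's activations."""
    return np.tanh((x if state is None else x + state) @ W)

# ColBERT-style: document encodings depend only on the documents,
# so they are computed once, offline, and reused for every query.
doc_enc_offline = encode(docs)

def colbert_rank(q):
    return doc_enc_offline @ encode(q)                  # query time: encode q only

def retro_rank(q, neighbor_ids, decoder_state):
    nb_enc = encode(docs[neighbor_ids], decoder_state)  # re-encoded on every query
    return nb_enc @ encode(q)

q = rng.standard_normal(d).astype(np.float32)
state = rng.standard_normal(d).astype(np.float32)
print(colbert_rank(q).shape, retro_rank(q, np.arange(k), state).shape)
```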

To put RETRO on the map of ODQA paradigms:

(figure: RETRO placed on the map of ODQA paradigms)

References

  1. RETRO Is Blazingly Fast

  2. The Illustrated Retrieval Transformer
