Draft:Retrieval Augmented Generation (RAG)


Retrieval Augmented Generation (RAG) is a neural network method for enhancing language models with information unseen during training, so that they can perform tasks such as answering questions using that added information.[1] In its original 2020 form,[2] the weights of the neural networks in a RAG system do not change. In contrast, the weights do change in other language model enhancement methods such as fine-tuning. In RAG, a body of new information is vectorized, and selected portions are retrieved when the network needs to generate a response.
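A toy Python sketch of this flow (vectorize new information, retrieve the portions most similar to the query, hand them to a generator) is shown below. The random embed() function is only a stand-in for a real text encoder, and the generate() call is hypothetical; neither is part of the original RAG formulation.

```python
# Toy sketch of the RAG flow: vectorize new information, retrieve the
# portions most similar to the query, and pass them to a generator.
# embed() is a placeholder for a real text encoder; generate() is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def embed(texts):
    return rng.random((len(texts), 384)).astype("float32")  # placeholder encoder

documents = [
    "RAG was introduced in 2020.",
    "Fine-tuning changes model weights.",
    "RAG leaves model weights unchanged.",
]
doc_vectors = embed(documents)          # vectorize the body of new information

query = "Does RAG change the model's weights?"
query_vector = embed([query])[0]

scores = doc_vectors @ query_vector     # similarity by dot product
top_k = scores.argsort()[::-1][:2]      # select the closest portions
context = "\n".join(documents[i] for i in top_k)

# answer = language_model.generate(context + "\n" + query)   # hypothetical call
```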

Flow chart for Retrieval Augmented Generation (RAG). Black lettered boxes show data being changed, and blue lettering shows the machinery performing the changes. The boundaries between the stages of RAG are not rigid.

The problems that RAG addresses are information staleness and lack of factual accuracy (the latter often discussed in terms of grounding and hallucinations).

Techniques

Improvements to the generated response can be applied at different stages of the RAG flow.

Encoder

These methods center on encoding text as either dense or sparse vectors. Sparse vectors, which encode the identity of a word, are typically dictionary-length and contain almost all zeros. Dense vectors, which aim to encode meaning, are much smaller and contain far fewer zeros.
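The difference can be illustrated with a short Python sketch; the choice of scikit-learn's CountVectorizer for the sparse encoding and the sentence-transformers model "all-MiniLM-L6-v2" for the dense one is an illustrative assumption, not something prescribed by RAG itself.

```python
# Minimal sketch contrasting sparse and dense text encodings.
# Library and model choices here are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sentence_transformers import SentenceTransformer

docs = ["The cat sat on the mat.", "A feline rested on a rug."]

# Sparse encoding: one dimension per vocabulary word, mostly zeros.
sparse_vecs = CountVectorizer().fit_transform(docs)   # shape: (2, vocab_size)
print(sparse_vecs.toarray())

# Dense encoding: a few hundred dimensions, nearly all non-zero,
# intended to capture meaning rather than word identity.
model = SentenceTransformer("all-MiniLM-L6-v2")
dense_vecs = model.encode(docs)                       # shape: (2, 384)
print(dense_vecs.shape)
```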

  • Several enhancements can be made to the way similarities are calculated in the vector stores (databases). Performance can be improved with faster dot products, approximate nearest neighbors, or centroid searches,[3] and accuracy can be improved with late interaction.[4] A sketch of exact versus approximate search follows this list.
  • Hybrid vectors combine dense vector representations with sparse one-hot vectors, so that the faster sparse dot products can be used rather than dense ones.[5] Other methods can combine sparse approaches (BM25, SPLADE) with dense ones such as DRAGON.
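The following Python sketch builds both an exact inner-product index and an approximate (inverted-file) index with the FAISS library cited above; the embedding dimension and the nlist and nprobe values are illustrative assumptions rather than recommended settings.

```python
# Minimal sketch of exact vs. approximate nearest-neighbor search with FAISS.
# Dimension, nlist and nprobe values are illustrative assumptions.
import faiss
import numpy as np

d = 384                                              # assumed embedding dimension
xb = np.random.rand(10_000, d).astype("float32")     # document vectors
xq = np.random.rand(5, d).astype("float32")          # query vectors

# Exact search: brute-force inner products against every stored vector.
flat = faiss.IndexFlatIP(d)
flat.add(xb)
D_exact, I_exact = flat.search(xq, 5)

# Approximate search: cluster the vectors (inverted file) and probe only a
# few centroids per query, trading a little accuracy for speed.
nlist = 100
quantizer = faiss.IndexFlatIP(d)
ivf = faiss.IndexIVFFlat(quantizer, d, nlist, faiss.METRIC_INNER_PRODUCT)
ivf.train(xb)
ivf.add(xb)
ivf.nprobe = 8
D_approx, I_approx = ivf.search(xq, 5)
```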

Retriever-centric methods

  • pre-train the retriever using the Inverse Cloze Task.[6]
  • progressive data augmentation. The DRAGON method samples difficult negatives to train a dense vector retriever.[7]
  • under supervision, train the retriever for a given generator. Given a prompt and the desired answer, retrieve the top-k vectors, feed those vectors into the generator to obtain the likelihood (perplexity) of the correct answer, and then minimize the KL divergence between the retriever's distribution over the retrieved vectors and the language model's likelihoods in order to adjust the retriever (a sketch follows this list).[8]
  • use reranking to train the retriever.[9]
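A minimal PyTorch sketch of the supervised idea above, in the spirit of REPLUG:[8] the retriever's distribution over the k retrieved documents is pushed toward a target distribution derived from how much each document raises the frozen generator's likelihood of the correct answer. The tensor shapes, temperature, and function names are assumptions, not the published training recipe.

```python
# Sketch of training a retriever against a frozen generator by minimizing
# KL divergence (REPLUG-style). Shapes, temperature and names are assumptions.
import torch
import torch.nn.functional as F

def retriever_loss(retrieval_scores, answer_log_likelihoods, tau=0.1):
    """retrieval_scores:       (k,) query-document similarities (requires grad)
       answer_log_likelihoods: (k,) log p_LM(answer | prompt + doc_i), LM frozen"""
    # Retriever's distribution over the k retrieved documents.
    retriever_log_probs = F.log_softmax(retrieval_scores / tau, dim=-1)
    # Target distribution: documents that make the correct answer more likely
    # under the frozen language model receive more probability mass.
    lm_probs = F.softmax(answer_log_likelihoods / tau, dim=-1).detach()
    # KL(lm_probs || retriever_probs); gradients flow only into the retriever.
    return F.kl_div(retriever_log_probs, lm_probs, reduction="batchmean")

# Example with dummy scores for k = 4 retrieved documents.
scores = torch.randn(4, requires_grad=True)
log_likelihoods = torch.randn(4)
loss = retriever_loss(scores, log_likelihoods)
loss.backward()
```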

Language model

Retro language model for RAG. Each Retro block consists of Attention, Chunked Cross Attention, and Feed Forward layers. Black lettered boxes show data being changed, and blue lettering shows the algorithm performing the changes.

By redesigning the language model with the retriever in mind, a 25-times smaller network can achieve perplexity comparable to that of its much larger counterparts.[10] Because it is trained from scratch, this method (Retro) incurs the heavy cost of training runs that the original RAG scheme avoided. The hypothesis is that, by being given domain knowledge during training, Retro needs to devote less capacity to domain facts and can spend its smaller weight budget on language semantics. The redesigned language model is shown in the figure.

It has been reported that Retro is not reproducible, so modifications were made to make it so. The more reproducible version is called Retro++ and includes in-context RAG.[11]
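In-context RAG, as the name suggests, places the retrieved passages directly in the prompt of an otherwise unmodified language model. A minimal Python sketch of that prompt assembly is shown below; the template wording and the generate() call are assumptions, not the Retro++ implementation.

```python
# Minimal sketch of in-context RAG: retrieved chunks are simply prepended
# to the user's question and sent to an unmodified language model.
# The template wording and the generate() placeholder are assumptions.
def build_rag_prompt(question, retrieved_chunks):
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

chunks = ["RAG was introduced in 2020.", "Retro retrieves from trillions of tokens."]
prompt = build_rag_prompt("When was RAG introduced?", chunks)
# response = some_language_model.generate(prompt)   # hypothetical call
```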

Chunking

Converting domain data into vectors should be done thoughtfully. It is naive to convert an entire document into a single vector and expect the retriever to find details from that document in response to a query. There are various strategies for how to break up the data; this is called chunking.

Different data styles have patterns that correct chunking can take advantage of:
  • Fixed length with overlap. This is fast and easy. Overlapping consecutive chunks helps to maintain semantic context across chunks (a sketch follows this list).
  • Syntax-based chunking can break a document up by sentences. Libraries such as spaCy or NLTK can also help.
  • File format based chunking. Certain file types have natural chunks built in, and it is best to respect them. For example, code files are best chunked and vectorized as whole functions or classes. HTML files should leave <table> or base64-encoded <img> elements intact. Similar considerations apply to PDF files. Libraries such as Unstructured or LangChain can assist with this method.
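A minimal Python sketch of fixed-length chunking with overlap is shown below. Splitting is done on characters for simplicity (token-based splitting is common in practice), and the chunk size and overlap values are illustrative assumptions.

```python
# Minimal sketch of fixed-length chunking with overlap. Sizes are in
# characters for simplicity; the specific numbers are assumptions.
def chunk_text(text, chunk_size=500, overlap=50):
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

document = "RAG retrieves relevant chunks and feeds them to a language model. " * 40
for c in chunk_text(document)[:3]:
    print(len(c), repr(c[:40]))
```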

References

    1. ^ ""What Is Retrieval-Augmented Generation"". blogs.nvidia.com. 15 November 2023.
    2. ^ Lewis, Patrick; Perez, Ethan; Piktus, Aleksandra; Petroni, Fabio; Karpukhin, Vladimir; Goyal, Naman; Küttler, Heinrich; Lewis, Mike; Yih, Wen-tau; Rocktäschel, Tim; Riedel, Sebastian; Kiela, Douwe (2020). ""Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks"". arXiv:2005.11401 [cs.CL].
    3. ^ "faiss". GitHub.
    4. ^ Khattab, Omar; Zaharia, Matei (2020). ""ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"". arXiv:2004.12832 [cs.IR].
    5. ^ Formal, Thibault; Lassance, Carlos; Piwowarski, Benjamin; Clinchant, Stéphane (2021). ""SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval"". arXiv:2109.10086 [cs.IR].
    6. ^ Lee, Kenton; Chang, Ming-Wei; Toutanova, Kristina (2019). ""Latent Retrieval for Weakly Supervised Open Domain Question Answering"". arXiv:1906.00300 [cs.CL].
    7. ^ Lin, Sheng-Chieh; Asai, Akari; Li, Minghan; Oguz, Barlas; Lin, Jimmy; Mehdad, Yashar; Yih, Wen-tau; Chen, Xilun (2023). ""How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval"". arXiv:2302.07452 [cs.IR].
    8. ^ Shi, Weijia; Min, Sewon; Yasunaga, Michihiro; Seo, Minjoon; James, Rich; Lewis, Mike; Zettlemoyer, Luke; Yih, Wen-tau (2023). ""REPLUG: Retrieval-Augmented Black-Box Language Models"". arXiv:2301.12652 [cs.CL].
    9. ^ Ram, Ori; Levine, Yoav; Dalmedigos, Itay; Muhlgay, Dor; Shashua, Amnon; Leyton-Brown, Kevin; Shoham, Yoav (2023). ""In-Context Retrieval-Augmented Language Models"". arXiv:2302.00083 [cs.CL].
    10. ^ Borgeaud, Sebastian; Mensch, Arthur; Hoffmann, Jordan; Cai, Trevor; Rutherford, Eliza; Millican, Katie; George van den Driessche; Lespiau, Jean-Baptiste; Damoc, Bogdan; Clark, Aidan; Diego de Las Casas; Guy, Aurelia; Menick, Jacob; Ring, Roman; Hennigan, Tom; Huang, Saffron; Maggiore, Loren; Jones, Chris; Cassirer, Albin; Brock, Andy; Paganini, Michela; Irving, Geoffrey; Vinyals, Oriol; Osindero, Simon; Simonyan, Karen; Rae, Jack W.; Elsen, Erich; Sifre, Laurent (2021). ""Improving language models by retrieving from trillions of tokens"". arXiv:2112.04426v1 [cs.CL].
    11. ^ Wang, Boxin; Ping, Wei; Xu, Peng; McAfee, Lawrence; Liu, Zihan; Shoeybi, Mohammad; Dong, Yi; Kuchaiev, Oleksii; Li, Bo; Xiao, Chaowei; Anandkumar, Anima; Catanzaro, Bryan (2023). ""Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study"". arXiv:2304.06762 [cs.CL].