RAGE is a tool for evaluating how well Large Language Models (LLMs) cite relevant sources in Retrieval Augmented Generation (RAG) tasks.
RAGE is a framework designed to evaluate Large Language Models (LLMs) with respect to their suitability for Retrieval Augmented Generation (RAG) applications. In RAG settings, an LLM is augmented with documents that are relevant to a given search query. The key element evaluated is the LLM's ability to cite the sources it actually used to generate its answer.
The main idea is to present the LLM with a query together with relevant, irrelevant, and seemingly relevant documents. Seemingly relevant documents come from the same topic area as the relevant documents but do not contain the actual answer. RAGE then measures how well the LLM recognizes the relevant documents.
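The core measurement idea can be illustrated with a small, self-contained sketch. Note that this is *not* the RAGE API: the function name, metric choice (citation precision/recall), and document IDs below are illustrative assumptions; the actual metrics and their definitions are described in the paper linked below.

```python
# Hypothetical illustration of a RAGE-style evaluation step (not the actual
# rage_toolkit API): given which documents the LLM cited and which documents
# were truly relevant, score how well the relevant ones were recognized.

def citation_scores(cited_ids, relevant_ids):
    """Precision/recall of the LLM's citations against the relevant set."""
    cited = set(cited_ids)
    relevant = set(relevant_ids)
    true_positives = cited & relevant
    precision = len(true_positives) / len(cited) if cited else 0.0
    recall = len(true_positives) / len(relevant) if relevant else 0.0
    return precision, recall

# Example: the model cited doc1 (relevant) and doc3 (seemingly relevant:
# same topic, but no answer); doc2 was the other truly relevant document.
precision, recall = citation_scores(["doc1", "doc3"], ["doc1", "doc2"])
print(precision, recall)  # 0.5 0.5
```

Seemingly relevant documents make this check meaningful: a model that cites everything from the right topic area scores high recall but low precision.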
Figure 1: RAGE Evaluation Process. Examples are extracted from the Natural Questions Dataset.
For a more detailed description of the inner workings, dataset creation, and metrics, we refer to our paper:
→ Evaluating and Fine-Tuning Retrieval-Augmented Language Models to Generate Text With Accurate Citations
Pip:

```shell
pip install rage-toolkit
```

Build from source:

```shell
git clone https://github.com/othr-nlp/rage_toolkit.git
cd rage_toolkit
pip install -e .
```

We recommend starting with the rage_getting_started.ipynb Jupyter Notebook.
It gives you a quick introduction to setting up and running an evaluation with a custom LLM.
Note that RAGE works with any dataset that complies with our format. Feel free to create your own datasets that suit your needs.
For guidance on creating one, take a look at our preprocessed examples or refer to our paper.
Our datasets are built on top of those from the BEIR Benchmark.
Our preprocessed datasets can be found here:
| Original Dataset | Website | RAGE version on Huggingface |
|---|---|---|
| Natural Questions (NQ) | https://ai.google.com/research/NaturalQuestions | RAGE - NQ |
| HotpotQA | https://hotpotqa.github.io/ | RAGE - HotpotQA |
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Feel free to open issues or submit pull requests for improvements.