M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding

UNC Chapel Hill   Bloomberg
Comparison of multi-modal document understanding pipelines. Previous works focus on (a) Single-page DocVQA that cannot handle many long documents or (b) Text-based RAG that ignores visual information. Our (c) M3DocRAG framework retrieves relevant documents and answers questions using multi-modal retrieval and MLM components, so that it can efficiently handle many long documents while preserving visual information.

Abstract

Document visual question answering (DocVQA) pipelines that answer questions from documents have broad applications. Existing methods focus on handling single-page documents with multi-modal language models (MLMs), or rely on text-based retrieval-augmented generation (RAG) that uses text extraction tools such as optical character recognition (OCR). However, there are difficulties in applying these methods in real-world scenarios: (a) questions often require information across different pages or documents, and MLMs cannot handle many long documents; (b) documents often carry important information in visual elements such as figures, but text extraction tools ignore them. We introduce M3DocRAG, a novel multi-modal RAG framework that flexibly accommodates various document contexts (closed-domain and open-domain), question hops (single-hop and multi-hop), and evidence modalities (text, chart, figure, etc.). M3DocRAG finds relevant documents and answers questions using a multi-modal retriever and an MLM, so that it can efficiently handle single or many documents while preserving visual information. Since previous DocVQA datasets ask questions in the context of a specific document, we also present M3DocVQA, a new benchmark for evaluating open-domain DocVQA over 3,000+ PDF documents with 40,000+ pages. On three benchmarks (M3DocVQA/MMLongBench-Doc/MP-DocVQA), empirical results show that M3DocRAG with ColPali and Qwen2-VL 7B outperforms many strong baselines, including achieving state-of-the-art performance on MP-DocVQA. We provide comprehensive analyses of different indexing strategies, MLMs, and retrieval models. Lastly, we qualitatively show that M3DocRAG can successfully handle various scenarios, such as when relevant information exists across multiple pages and when answer evidence only exists in images.

Method

M3DocRAG: A Unified Framework for Multi-modal, Multi-page, Multi-document Understanding

Our M3DocRAG framework consists of three stages: (1) document embedding, (2) page retrieval, and (3) question answering. In (1) document embedding, we extract visual embeddings (with ColPali) that represent each page of every PDF document. In (2) page retrieval, we retrieve the top-K pages with the highest relevance (MaxSim scores) to the text query. In the open-domain setting, we create approximate page indices for faster search. In (3) question answering, we conduct visual question answering with a multi-modal LM (e.g., Qwen2-VL) to obtain the final answer.
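Below is a minimal PyTorch sketch of the late-interaction (MaxSim) relevance scoring behind stages (1) and (2). The embed_page and embed_query helpers in the usage comment are hypothetical stand-ins for a ColPali-style multi-vector encoder (one embedding per image patch or query token), not the actual ColPali API.

import torch

def maxsim_score(query_emb: torch.Tensor, page_emb: torch.Tensor) -> torch.Tensor:
    # Late-interaction relevance: for each query token, take the maximum
    # similarity over all page patch embeddings, then sum over query tokens.
    # query_emb: (n_query_tokens, dim); page_emb: (n_page_patches, dim)
    sim = query_emb @ page_emb.T          # (n_query_tokens, n_page_patches)
    return sim.max(dim=1).values.sum()    # scalar relevance score

def retrieve_top_k(query_emb, page_embs, k=4):
    # Score every page against the query and return the indices of the top-k pages.
    # page_embs: list of (n_patches_i, dim) tensors, one per PDF page.
    scores = torch.stack([maxsim_score(query_emb, p) for p in page_embs])
    return scores.topk(min(k, len(page_embs))).indices.tolist()

# Hypothetical usage (embed_page / embed_query stand in for a ColPali-style encoder):
# page_embs = [embed_page(img) for img in page_images]
# query_emb = embed_query("Who founded the company shown in the logo?")
# top_pages = retrieve_top_k(query_emb, page_embs, k=4)

The retrieved page images are then passed, together with the question, to the multi-modal LM in stage (3).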

Dataset

M3DocVQA: A New Benchmark for Open-domain Document Understanding

Comparison of existing DocVQA datasets (left; e.g., DocVQA) and our M3DocVQA dataset (right). In contrast to previous DocVQA datasets that have questions that are specific to a single provided PDF (e.g., “What was the gross profit in the year 2009?”), M3DocVQA has information-seeking questions that benchmark open-domain question answering capabilities across more than 3,000 PDF documents (i.e., 40,000+ pages).


Dataset Collection

We extend the question-answer pairs from a short-context VQA dataset, MultimodalQA, to a more complex setting that includes 1) PDF documents and 2) open-domain contexts. We first collect the URLs of all supporting contexts (Wikipedia documents) for each question in MultimodalQA. Then, we create PDF versions of these pages by rendering the URLs in a web browser.
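The text above does not pin down the specific rendering tool, so the following is only an illustrative sketch of how the URL-to-PDF step could be scripted with Playwright's headless Chromium (the wiki_urls list is a placeholder, not the actual M3DocVQA collection script):

# Sketch: render Wikipedia pages to PDF with a headless browser.
# Assumes `pip install playwright` and `playwright install chromium`.
from pathlib import Path
from playwright.sync_api import sync_playwright

def urls_to_pdfs(wiki_urls, out_dir="m3docvqa_pdfs"):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for i, url in enumerate(wiki_urls):
            page.goto(url, wait_until="networkidle")
            page.pdf(path=f"{out_dir}/doc_{i:05d}.pdf", format="A4")  # PDF export is Chromium-only
        browser.close()

# urls_to_pdfs(["https://en.wikipedia.org/wiki/Valencia_CF"])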

Evaluation Results

We benchmark M3DocRAG on three PDF document understanding datasets that represent different scenarios:
  1. M3DocVQA (Open-domain VQA with multi-hop questions across multiple documents)
  2. MMLongBench-Doc (Closed-domain VQA with multi-hop questions across a single document)
  3. MP-DocVQA (Closed-domain VQA with single-hop questions across a single document)

Evaluation on Open-domain DocVQA: M3DocVQA

We observe that our M3DocRAG (ColPali + Qwen2-VL 7B) significantly outperforms text RAG (ColBERT v2 + Llama 3.1 8B) across all evidence modalities, question hops, and numbers of pages. The performance gap is especially large when the evidence involves images, underscoring that M3DocRAG mitigates the information loss that text-only pipelines suffer on non-textual content. We also notice that providing more retrieved pages as context generally increases the performance of both text RAG and M3DocRAG.
Open-domain DocVQA evaluation results on M3DocVQA. The scores are based on F1, unless otherwise noted. Index: FlatIP + IVFFlat.
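The "FlatIP + IVFFlat" entry refers to exact inner-product search versus an inverted-file approximation. A minimal FAISS sketch of the two index types, assuming one pooled vector per page (how the multi-vector ColPali embeddings are pooled or combined with exact re-scoring is not shown here):

import faiss
import numpy as np

d = 128                                                   # embedding dimension (assumed)
page_vecs = np.random.rand(40000, d).astype("float32")    # placeholder page vectors
query_vec = np.random.rand(1, d).astype("float32")

# Exact inner-product search (FlatIP): no training, scans every vector.
flat = faiss.IndexFlatIP(d)
flat.add(page_vecs)
scores, ids = flat.search(query_vec, 10)

# Approximate search (IVFFlat): cluster vectors into nlist cells, probe only a few at query time.
nlist = 1024
quantizer = faiss.IndexFlatIP(d)
ivf = faiss.IndexIVFFlat(quantizer, d, nlist, faiss.METRIC_INNER_PRODUCT)
ivf.train(page_vecs)        # learn the coarse clustering on the corpus
ivf.add(page_vecs)
ivf.nprobe = 16             # more probes -> better recall, slower search
approx_scores, approx_ids = ivf.search(query_vec, 10)

The approximate index trades a small amount of retrieval accuracy for much faster search over the 40,000+ pages in the open-domain setting.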


Evaluation on Closed-domain DocVQA: MMLongBench-Doc

In MMLongBench-Doc, the models need to handle even longer PDF documents (up to 120 pages) than in MP-DocVQA (up to 20 pages). The table shows that ColPali + Idefics2 surpasses Idefics2 without RAG, as well as all previous multi-modal entries. ColPali + Qwen2-VL 7B achieves the best scores in overall F1 and in most evidence modality/page settings. This demonstrates the effectiveness of multi-modal retrieval over handling many pages by concatenating low-resolution images.
Closed-domain DocVQA evaluation results on MMLongBench-Doc. We report the generalized accuracy (ACC) across five evidence source modalities: text (TXT), layout (LAY), chart (CHA), table (TAB), and image (IMG), and three evidence locations: single-page (SIN), cross-page (MUL), and unanswerable (UNA). The scores from non-RAG methods are from Ma et al.


Evaluation on Closed-domain DocVQA: MP-DocVQA

While the text RAG pipeline (ColBERT v2 + Llama 3.1) falls short of existing approaches, all multi-modal RAG pipelines outperform their text-based counterpart. Notably, the M3DocRAG pipeline (ColPali + Qwen2-VL 7B) delivers state-of-the-art results on MP-DocVQA.

Closed-domain DocVQA evaluation results on MP-DocVQA. The RAG methods retrieve a single page and pass it to the downstream QA models.
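As a reference for this single-page setting, here is a hedged sketch of the QA stage: the retrieved page image and the question are passed to Qwen2-VL through the Hugging Face transformers integration (model name and prompt format follow the public Qwen2-VL-7B-Instruct checkpoint; adjust to your setup):

# Sketch of the QA stage: answer a question from a single retrieved page image with Qwen2-VL.
# Assumes a transformers version with Qwen2-VL support and a GPU.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

def answer(page_image: Image.Image, question: str) -> str:
    messages = [{"role": "user",
                 "content": [{"type": "image"},
                             {"type": "text", "text": question}]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=[prompt], images=[page_image], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]   # keep only the generated answer
    return processor.batch_decode(new_tokens, skip_special_tokens=True)[0]

# answer(retrieved_page, "What was the gross profit in the year 2009?")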

Qualitative Examples

We provide qualitative examples of question answering results from M3DocRAG (ColPali + Qwen2-VL 7B) on several M3DocVQA questions. Overall, these examples showcase that M3DocRAG can successfully tackle questions whose answer sources exist in various modalities.

Answer is stored visually

The answer information is only stored visually within the game logo, where a man is leaning on a motorcycle.

Multi-page/document reasoning

The question requires multi-page/document reasoning.

Combining retrieved knowledge and the knowledge of the VQA model

The VQA component combines the retrieved knowledge (Tropi was transferred on 11 July 2017) with its own knowledge (Valencia CF has a logo with a bat) to provide the final answer.

Citation

Please cite our paper if you use our dataset and/or method in your projects.

@article{Cho2024M3DocRAG,
  author    = {Jaemin Cho and Ozan İrsoy and Debanjan Mahata and Yujie He and Mohit Bansal},
  title     = {M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding},
  year      = {2024},
}