Multi-Document Analysis
Extract structured answers from up to 50 documents with citations and confidence scores.
Multi-Document Analysis lets you ask structured questions across a large set of documents and get organized results with evidence, citations, and confidence scores. Upload up to 50 documents, define your queries, and Irys extracts answers into a reviewable matrix.
Starting an analysis
Query answer types
When defining queries, choose the answer type that best fits what you are looking for:
Free text: open-ended extraction. Example: "What are the termination provisions?"
Yes / No: binary determination. Example: "Does this agreement contain a non-compete clause?"
Date: date extraction. Example: "What is the effective date?"
Currency / Number: monetary or numeric values. Example: "What is the total contract value?"
Multiple choice: selection from defined options. Example: "What is the governing law? (NY / CA / DE / Other)"
List: multiple items. Example: "List all parties to the agreement."
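Irys does not document a public API for defining queries, but the answer types above are easy to picture as plain data. The sketch below is illustrative only; `make_query` and the field names are hypothetical, not part of the product.

```python
# Illustrative only: models a query set as plain data, using the answer
# types described above. make_query and the field names are hypothetical.
ANSWER_TYPES = {"free_text", "yes_no", "date", "currency_number",
                "multiple_choice", "list"}

def make_query(prompt, answer_type, options=None):
    """Build one query record, validating the answer type."""
    if answer_type not in ANSWER_TYPES:
        raise ValueError(f"unknown answer type: {answer_type}")
    if answer_type == "multiple_choice" and not options:
        raise ValueError("multiple_choice queries need a list of options")
    return {"prompt": prompt, "answer_type": answer_type, "options": options}

queries = [
    make_query("What are the termination provisions?", "free_text"),
    make_query("Does this agreement contain a non-compete clause?", "yes_no"),
    make_query("What is the governing law?", "multiple_choice",
               options=["NY", "CA", "DE", "Other"]),
]
```

A multiple-choice query carries its options with it, so the defined options travel along when the query set is reused.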
Reusable query templates
Save sets of queries as templates to reuse across future analyses. This is useful when you run the same type of review repeatedly — due diligence checklists, compliance audits, or standard contract reviews.
Save the current query set as a template.
Load a saved template to pre-fill queries for a new analysis.
Edit templates at any time.
Templates save significant time on recurring workflows. Build a template once for your standard review checklist and apply it to every new deal or case.
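Irys stores and loads templates in-app; purely as an illustration of the save/load idea, a template can be thought of as a named, serialized query set. The structure below is hypothetical.

```python
# Illustrative only: a reusable template as a named, saved query set.
# The JSON structure here is hypothetical, not the product's format.
import json

template = {
    "name": "Due diligence checklist",
    "queries": [
        {"prompt": "List all parties to the agreement.", "answer_type": "list"},
        {"prompt": "What is the effective date?", "answer_type": "date"},
    ],
}

saved = json.dumps(template)    # "Save as template"
loaded = json.loads(saved)      # "Load template" pre-fills these queries
```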
Results matrix
Once the analysis completes, results are displayed in a structured matrix:
Each column represents a document.
Each row represents a query.
Each cell contains the extracted answer for that document-query pair.
Scroll horizontally to compare answers across documents and vertically to review all queries for a single document.
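The matrix layout above amounts to a pivot of flat (document, query, answer) records. A minimal sketch of that pivot, with hypothetical record fields:

```python
# Illustrative only: pivots flat cell records into the matrix layout
# described above, {query: {document: answer}}. Field names are hypothetical.
def build_matrix(cells):
    """Group answers so each row is a query and each key within it a document."""
    matrix = {}
    for cell in cells:
        matrix.setdefault(cell["query"], {})[cell["document"]] = cell["answer"]
    return matrix

cells = [
    {"document": "MSA.pdf", "query": "Effective date?", "answer": "2024-01-15"},
    {"document": "NDA.pdf", "query": "Effective date?", "answer": "2023-11-02"},
    {"document": "MSA.pdf", "query": "Non-compete?", "answer": "Yes"},
]
matrix = build_matrix(cells)
```

Reading one inner dict across its keys compares a single query across documents; reading one document's key down every row reviews all queries for that document.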
Evidence panel
Click on any cell in the results matrix to open the evidence panel, which provides:
Source text: the exact passage in the document from which the answer was extracted.
AI reasoning: an explanation of how Irys arrived at the answer.
Page citations: the specific page number(s) where the evidence was found.
Confidence score: a High, Medium, or Low rating indicating how confident the AI is in the extracted answer.
Confidence scores help you prioritize review. Focus manual attention on Low and Medium confidence answers, which are the most likely to need human verification.
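The prioritization above can be sketched as a simple sort that surfaces the least confident cells first. This is illustrative only; the cell structure is hypothetical, though Irys does report confidence as High, Medium, or Low.

```python
# Illustrative only: order cells so Low and Medium confidence answers
# come first for manual review. The cell fields are hypothetical.
REVIEW_PRIORITY = {"Low": 0, "Medium": 1, "High": 2}

def review_queue(cells):
    """Return cells sorted lowest-confidence first."""
    return sorted(cells, key=lambda c: REVIEW_PRIORITY[c["confidence"]])

cells = [
    {"document": "MSA.pdf", "query": "Effective date?", "confidence": "High"},
    {"document": "NDA.pdf", "query": "Non-compete?", "confidence": "Low"},
    {"document": "SOW.pdf", "query": "Contract value?", "confidence": "Medium"},
]
queue = review_queue(cells)  # the Low confidence NDA.pdf cell comes first
```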
Jump to source
From the evidence panel, click Jump to Source to navigate directly to the relevant page in the original document. This lets you verify the extraction in full context without searching through the document manually.
Exporting results
Export your analysis results in the format that fits your workflow:
PDF: sharing a formatted report with stakeholders.
DOCX: editing and annotating results in a word processor.
XLSX: working with results in a spreadsheet for sorting, filtering, and further analysis.
Click Export from the results view and select your preferred format.
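The in-app Export button is the documented path to these formats. Purely as an illustration of the spreadsheet shape the matrix flattens into, here is a standard-library sketch writing matrix results as CSV; the data layout is hypothetical.

```python
# Illustrative only: flatten a {query: {document: answer}} matrix into
# CSV text, one header row of documents and one row per query.
import csv
import io

def matrix_to_csv(matrix, documents):
    """Write the matrix as CSV text with documents as columns."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Query"] + documents)
    for query, row in matrix.items():
        writer.writerow([query] + [row.get(doc, "") for doc in documents])
    return buf.getvalue()

matrix = {"Effective date?": {"MSA.pdf": "2024-01-15", "NDA.pdf": "2023-11-02"}}
text = matrix_to_csv(matrix, ["MSA.pdf", "NDA.pdf"])
```

Missing document-query pairs become empty cells, so every row keeps the same column order as the header.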