## What is Quick Report?
Quick Report (Autopilot) is a one-click pipeline that runs the entire systematic review workflow automatically using sensible default settings. Instead of manually stepping through screening → extraction → analysis → report, click the 🚀 button and let AI4Meta handle it.
## Pipeline Steps
The autopilot runs these steps sequentially:
| Step | What it does | Default settings |
|------|--------------|------------------|
| 1. Screening | Multi-LLM deliberation screening of all pending papers | Uses project protocol (PICO, inclusion/exclusion criteria) |
| 2. Codebook | Generates data extraction variables from your protocol | AI-generated from PICO + research questions |
| 3. Extraction | Extracts data from all included papers using the codebook | LLM extraction, fills missing cells only |
| 4. Dataset | Compiles extractions into an analysis-ready dataset | Auto-confirms for analysis |
| 5. Analysis | Runs random-effects meta-analysis (REML) | REML estimator, 95% CI, no moderators |
| 6. Report | Generates a full report with all available sections | All applicable sections included |
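The fixed step order above can be sketched as a simple driver loop. This is an illustrative sketch only; the step names mirror the table, but `run_quick_report` and its callback are hypothetical, not AI4Meta's actual API.

```python
# Illustrative sketch of the autopilot's fixed step order.
# Step names come from the pipeline table; everything else is hypothetical.
STEPS = [
    "screening",   # multi-LLM deliberation on all pending papers
    "codebook",    # generate extraction variables from the protocol
    "extraction",  # fill missing cells for included papers
    "dataset",     # compile extractions into an analysis-ready dataset
    "analysis",    # random-effects meta-analysis (REML, 95% CI)
    "report",      # assemble all applicable sections
]

def run_quick_report(run_step) -> list:
    """Run each step in sequence; `run_step` stands in for the real work."""
    completed = []
    for step in STEPS:
        run_step(step)
        completed.append(step)
    return completed
```

Each step consumes the previous step's output, which is why the order is fixed and cannot be rearranged during an autopilot run.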
## Smart Skipping
Each step checks whether it has already been completed:
- Papers already screened → screening is skipped
- Codebook exists → generation is skipped
- All cells already extracted → extraction is skipped
- Dataset already confirmed → compilation is skipped
- Analysis results exist → analysis is skipped
This means you can safely re-run the pipeline after making manual changes.
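The skip checks above amount to planning only the steps whose outputs are missing. Here is a hedged sketch of that idea; the state keys and the `plan_pipeline` function are illustrative, not AI4Meta's internals, and the assumption that the report is always regenerated is inferred from its absence in the skip list.

```python
# Hedged sketch of smart skipping: plan only the steps not yet completed.
# State keys are illustrative names for the checks listed above.
def plan_pipeline(state: dict) -> list:
    """Return the steps that still need to run for this project."""
    checks = [
        ("screening",  state.get("papers_screened", False)),
        ("codebook",   state.get("codebook_exists", False)),
        ("extraction", state.get("cells_extracted", False)),
        ("dataset",    state.get("dataset_confirmed", False)),
        ("analysis",   state.get("analysis_exists", False)),
    ]
    todo = [step for step, done in checks if not done]
    todo.append("report")  # assumption: the report is always (re)generated
    return todo
```

Under this model, re-running the pipeline after manual changes is safe: only the steps you invalidated (plus the report) are redone.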
## How to Use
1. Set up your project with papers and a protocol (PICO, research questions)
2. Click the 🚀 Quick Report button in the workspace toolbar
3. Review the pipeline steps in the confirmation dialog
4. Click Run Autopilot to start
5. Watch progress in the button and chat panel
6. When complete, open the Report tab to view results
## When to Use Autopilot vs Manual Steps
**Use Autopilot when:**
- You want a quick overview of your data
- Your review is straightforward (clear PICO, standard outcomes)
- You're exploring a dataset for the first time
- You want a draft report to iterate on
**Use manual steps when:**
- You need custom screening criteria per paper
- Your codebook requires domain-specific variables
- You want to review extractions before analysis
- You need specific analysis settings (moderators, subgroups)
- Your review has complex methodology requirements
## Limitations
- Uses default settings for every step (REML random-effects, no moderators)
- Screening relies on AI consensus and may flag papers for human review
- Extraction quality depends on abstract/full-text availability
- The report is a starting draft; always review and edit it before publishing
- Individual step parameters cannot be customized during an autopilot run