ResearchQ

AI-powered, self-hosted systematic literature review platform.


ResearchQ turns months of manual paper screening into days of structured, reproducible, AI-assisted research. It combines the OpenAlex academic graph (250M+ papers) with Google Gemini to automate the tedious parts of a systematic literature review — discovery, screening, metadata extraction, and synthesis — while keeping the researcher in full control of every decision.

Every search is logged. Every inclusion decision is recorded. Every AI extraction is traceable. The tool is built around 18 peer-reviewed review methodologies (PRISMA, Cochrane, Kitchenham, JBI, GRADE, and others) and exports everything in the formats journals actually require: BibTeX, CSV, DOCX, Markdown, JSON.

It runs entirely on your own machine (SQLite + Node.js) or, if you prefer, on your own Postgres. No SaaS, no vendor lock-in; your data never leaves your disk.

See FEATURE-DESCRIPTION.md for the full feature pitch.



Project status

ResearchQ is alpha. The core workflow (project setup → search → screening → extraction → review output) is feature-complete, but the tool has not yet undergone peer review and the API surface may change between minor versions until 1.0.0. A JOSS submission is in preparation.


Screenshots

Dashboard

Dashboard

The project dashboard — at-a-glance stats for papers, relevant items, search jobs, and recent activity across the active project. The sidebar project selector lets you jump between isolated research projects; each has its own papers, keywords, and analysis framework.

Powerful, multi-strategy article search

Search strategies

Six search strategies over the OpenAlex graph — topic, journal, author, publisher-scoped database search, and forward/backward citation chains. Every run shows a live progress feed (fetching → storing → AI screening → metadata extraction), and database searches are cursor-resumable so long runs survive interruptions.
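Cursor resumability boils down to persisting OpenAlex's `meta.next_cursor` between pages. A minimal TypeScript sketch of the idea (function names are illustrative, not ResearchQ's actual API):

```typescript
// Sketch of cursor-resumable deep paging over the OpenAlex /works endpoint.
// OpenAlex starts deep paging with cursor="*" and returns the next cursor
// in meta.next_cursor; persisting that value is what lets a long search
// survive interruptions.

const OPENALEX_WORKS = "https://api.openalex.org/works";

// Build the URL for one page of results.
function pageUrl(filter: string, cursor: string, perPage = 200): string {
  const params = new URLSearchParams({
    filter,
    "per-page": String(perPage),
    cursor,
  });
  return `${OPENALEX_WORKS}?${params}`;
}

// Resumable fetch loop: pass the last saved cursor to pick up mid-search.
async function* pages(filter: string, startCursor = "*") {
  let cursor: string | null = startCursor;
  while (cursor) {
    const res = await fetch(pageUrl(filter, cursor));
    const body = await res.json();
    yield body.results as unknown[];         // one page of works
    cursor = body.meta?.next_cursor ?? null; // null means we're done
    // a real pipeline would persist `cursor` here before the next iteration
  }
}
```

Restarting `pages(filter, savedCursor)` with the last persisted cursor continues the search exactly where it stopped.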

Cross-reference matrix

Cross-reference matrix

The heart of any systematic review: every relevant paper side-by-side with title, year, authors, venue, methodology, region, platform, and population. Sort any column, filter by year range or full text, and export the whole table as CSV, BibTeX, or JSON.
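As a sketch of the CSV export path, here is a minimal RFC 4180-style serializer over hypothetical matrix rows (field names are illustrative; the real exporter lives in src/lib/server/export.ts):

```typescript
// Minimal sketch of the matrix's CSV export. Row fields are illustrative.

type MatrixRow = {
  title: string;
  year: number;
  authors: string;
  venue: string;
  methodology: string;
  region: string;
};

// RFC 4180-style escaping: quote fields containing commas, quotes, or
// newlines, doubling any embedded double quotes.
function csvField(value: string | number): string {
  const s = String(value);
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

function toCsv(rows: MatrixRow[]): string {
  const columns = ["title", "year", "authors", "venue", "methodology", "region"] as const;
  const header = columns.join(",");
  const body = rows.map((row) =>
    columns.map((col) => csvField(row[col])).join(",")
  );
  return [header, ...body].join("\n");
}
```

`toCsv(rows)` yields one header line plus one line per paper, safe to open in any spreadsheet tool.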

Research intelligence

Research intelligence

Beyond simple aggregation — prolific authors with their keyword specialties, co-authorship networks, year-over-year keyword trends, AI-generated gap analysis identifying under-researched areas, and duplicate detection for papers collected from different searches.
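Cross-search duplicate detection is typically keyed on a normalized title (production pipelines usually also match on DOI). A minimal sketch, with names that are illustrative rather than ResearchQ's internals:

```typescript
// Sketch of duplicate detection for papers collected by different searches.

type Paper = { id: string; title: string };

// Lowercase, strip punctuation, collapse whitespace, so that
// "Deep Learning: A Survey" and "deep learning - a survey" collide.
function normTitle(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

// Group papers by normalized title; any group with 2+ members is a
// candidate duplicate set for the researcher to confirm or reject.
function findDuplicates(papers: Paper[]): Paper[][] {
  const groups = new Map<string, Paper[]>();
  for (const p of papers) {
    const key = normTitle(p.title);
    groups.set(key, [...(groups.get(key) ?? []), p]);
  }
  return [...groups.values()].filter((g) => g.length > 1);
}
```

Flagged groups stay candidates only; consistent with the rest of the tool, the merge decision remains with the researcher.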

Guided literature review execution

Literature review phase stepper

Apply any of the 18 built-in methodologies (PRISMA shown here — or Cochrane, Kitchenham, Arksey & O'Malley, JBI, GRADE, ENTREQ, and more). Each phase has a visible status, a note field, and one-click generation of the expected outputs: PRISMA flow diagrams, data extraction matrices, GRADE evidence tables, narrative syntheses, and 16 other deliverables — all traceable back to your paper data.


Quick start

# 1. Clone and install
git clone https://github.com/ErenDexter/ResearchQ.git
cd ResearchQ
pnpm install

# 2. Configure (at minimum: set an LLM API key)
cp .env.example .env
# then edit .env — see Configuration below

# 3. Create the local SQLite database
pnpm db:push

# 4. Run the dev server
pnpm dev

Open http://localhost:5173 — you'll be redirected straight to the "Create your first project" form. Describe your research domain in plain language, and the AI will generate keywords, analysis dimensions, and relevance criteria in about 30 seconds.

See docs/quickstart.md for a step-by-step walkthrough from install to first exported cross-reference matrix.


Installation

Requirements

  • Node.js ≥ 20
  • pnpm ≥ 9 (or use npm / yarn with equivalent commands)
  • A build toolchain for better-sqlite3 native bindings:
    • macOS: Xcode command-line tools (xcode-select --install)
    • Linux: build-essential and python3
    • Windows: windows-build-tools or Visual Studio Build Tools
  • An LLM API key — one of:
    • a Vercel AI Gateway key (LLM_PROVIDER=gateway, the default), or
    • a Google AI Studio key (LLM_PROVIDER=google)

Detailed install guide and troubleshooting: docs/installation.md.

Using Postgres instead of SQLite

SQLite is the default and works out of the box for local use. If you want to deploy ResearchQ as a shared service, set DB_DIALECT=postgres and point DATABASE_URL at your Postgres instance. Then run pnpm db:push:pg to apply the schema. See docs/configuration.md for details.
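A minimal sketch of that switch in `.env` (the host, credentials, and database name are placeholders):

```bash
# .env: switch from the default SQLite to a shared Postgres instance
DB_DIALECT=postgres
DATABASE_URL=postgres://researchq:secret@db.example.com:5432/researchq

# then apply the schema to Postgres:
# pnpm db:push:pg
```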


Configuration

All configuration lives in .env. The authoritative reference is .env.example — copy it to .env and fill in the blanks. Summary:

| Variable | Purpose | Default | Required |
|---|---|---|---|
| `DB_DIALECT` | `sqlite` (local) or `postgres` (cloud) | `sqlite` | No |
| `DATABASE_URL` | SQLite file path or Postgres connection string | `./data/researchq.db` | No |
| `LLM_PROVIDER` | `gateway` (Vercel AI Gateway) or `google` (Google AI Studio direct) | `gateway` | No |
| `LLM` | Gemini model ID (e.g. `google/gemini-2.0-flash` or `gemini-2.5-flash`) | `google/gemini-2.0-flash` | No |
| `AI_GATEWAY_KEY` | Vercel AI Gateway key | (none) | Yes (gateway mode) |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Google AI Studio key | (none) | Yes (google mode) |
| `OPENALEX_EMAIL` | Your email — gets you into OpenAlex's faster "polite" pool | (none) | Recommended |
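For a concrete starting point, a minimal `.env` for direct Google AI Studio access might look like this (the key and email values are placeholders):

```bash
LLM_PROVIDER=google
GOOGLE_GENERATIVE_AI_API_KEY=your-ai-studio-key
OPENALEX_EMAIL=you@university.edu   # unlocks OpenAlex's faster "polite" pool
```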

Feature overview

| Capability | What it does |
|---|---|
| Guided project setup | Describe your domain in plain language; the AI generates 15–25 keywords, 5–10 analysis dimensions, and explicit relevance criteria. |
| Six search strategies | Journal, author, topic, database (publisher-scoped), forward citations, backward citations — all over OpenAlex, all logged. |
| AI relevance screening | Gemini classifies every paper against your project's relevance criteria with confidence scores. Manual override always available. |
| Metadata extraction | Summary, methodology, population, region, platform, keywords extracted from each relevant paper's title + abstract. |
| Cross-reference matrix | Sortable, filterable, exportable comparison table across all papers. |
| 18 review methodologies | PRISMA, PRISMA-ScR, Cochrane, Kitchenham, Arksey & O'Malley, JBI, PICO/PICOS/PICOT, SPIDER, ENTREQ, Meta-ethnography, GRADE, CASP, Newcastle-Ottawa, SANRA, Whittemore & Knafl, PROSPERO, SLURP, Mapping Studies, plus custom. |
| 20 review outputs | PRISMA flow diagrams, data-extraction matrices, GRADE evidence tables, narrative syntheses, coding frameworks, and more — each generated automatically from your paper data. |
| Research intelligence | Prolific authors, co-authorship networks, keyword trends, AI-driven gap analysis, duplicate detection. |
| Export | BibTeX, CSV, JSON, Markdown, DOCX. |

Full feature tour: FEATURE-DESCRIPTION.md.


Architecture

ResearchQ is a SvelteKit app with a clean server-side module boundary. The browser is a thin client — all AI calls, database access, and OpenAlex requests happen server-side.

src/
├── lib/
│   ├── server/
│   │   ├── db/                  # Dialect-agnostic schema + client
│   │   │   ├── schema.sqlite.ts # SQLite tables
│   │   │   ├── schema.pg.ts     # Postgres tables
│   │   │   ├── schema.ts        # Runtime shim (picks via DB_DIALECT)
│   │   │   ├── index.ts         # Drizzle client (dual-driver)
│   │   │   └── seed-methodologies.ts
│   │   ├── llm.ts               # Provider-agnostic Gemini factory
│   │   ├── gemini.ts            # Relevance, extraction, classification
│   │   ├── openalex.ts          # OpenAlex API adapter
│   │   ├── project-setup.ts     # AI project wizard
│   │   ├── literature-review.ts # 20 review-output generators
│   │   ├── research-intelligence.ts # Gap analysis, networks, trends
│   │   ├── project-context.ts   # Per-project prompt context
│   │   ├── prompts.ts           # All AI prompts (single source)
│   │   ├── job-queue.ts         # Resumable search pipeline
│   │   └── export.ts            # BibTeX / CSV / JSON / DOCX
│   └── components/ui/           # Svelte 5 UI primitives
└── routes/                      # SvelteKit pages + API routes
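The runtime shim at src/lib/server/db/schema.ts picks table definitions via DB_DIALECT. A simplified, testable sketch of that selection logic (the real shim's contents may differ):

```typescript
// Map DB_DIALECT to the schema module the shim would load at startup.
// Mirrors the tree above; illustrative only.
function schemaModule(dialect: string | undefined): string {
  return dialect === "postgres" ? "./schema.pg" : "./schema.sqlite";
}
```

Because the same Drizzle client sits in front of either module, the rest of the server code is dialect-agnostic.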

Deep dive: docs/architecture.md.


Documentation

| Doc | What it covers |
|---|---|
| `docs/installation.md` | First-time install + troubleshooting |
| `docs/quickstart.md` | Step-by-step first-project walkthrough |
| `docs/methodologies.md` | Reference for all 18 built-in review methodologies |
| `docs/architecture.md` | Server module map + data flow |
| `docs/configuration.md` | Env vars, LLM/provider tradeoffs, model selection |
| `docs/faq.md` | Privacy, cost, rate limits, common issues |

Contributing

Contributions are welcome — bug reports, feature ideas, docs, and code.

All project participants are expected to follow our Code of Conduct.


Citation

If ResearchQ helps your research, please cite it. Machine-readable metadata is in CITATION.cff; the archived release is on Zenodo at 10.5281/zenodo.19675611.

@software{researchq_2026,
  author  = {Das Prangon, Ranat and Ahmed, Istiaque and Mandol, Souvik},
  title   = {ResearchQ: AI-powered, self-hosted systematic literature review platform},
  year    = {2026},
  url     = {https://github.com/ErenDexter/ResearchQ},
  doi     = {10.5281/zenodo.19675611},
  license = {MIT}
}

A companion paper is in preparation for the Journal of Open Source Software. Once published, the JOSS DOI will be added here and in CITATION.cff as the preferred citation.


License

MIT © 2026 Ranat Das Prangon, Istiaque Ahmed, Souvik Mandol


Acknowledgments

ResearchQ is built on the shoulders of:

  • OpenAlex, the open academic graph behind every search strategy
  • Google Gemini, which powers relevance screening, extraction, and synthesis
  • SvelteKit and Svelte 5, the application framework and UI layer
  • Drizzle ORM with better-sqlite3 and Postgres drivers for storage

AI usage disclosure

Generative AI is a core runtime feature of ResearchQ (relevance screening, metadata extraction, synthesis output generation) and is disclosed in the paper.md AI usage section per JOSS 2026 guidelines. All AI-generated outputs are traceable to the prompts in src/lib/server/prompts.ts and are always reviewable and overridable by the user.
