Academic · Education

Literature Reviews That Stay Searchable

Researchers and educators need synthesis now and reliable context later. This workflow shows how papers, notes, and findings stay searchable for the next question instead of disappearing into folders.

Electron App · Deep Research

Run The Literature Review As A Searchable Research System

Use Deep Research to synthesize papers, compare findings, and save the topic as a searchable research trail instead of a one-off review.

Research Question
Synthesize the literature, compare key papers, and retain the topic as a searchable session so the next review starts with evidence already organized.
[Image: Kendr Electron app Deep Research workspace for literature review]
Output package
Cited review, paper comparison table, unresolved questions
The result should be readable, citable, and easy to continue from.
Decision motion
Move from paper collection to usable synthesis
The review should show themes, conflicts, and what still needs work.
Reusable memory
Literature KB for future review
The next reading cycle should inherit the prior one.
Focus
Themes, methods, conflicts, open questions
The run stays centered on synthesis, not just collection.
Source families
Papers, citations, notes
Bring the literature and the team's notes together.
Output package
Cited review, paper table, question list
Keep the result easy to scan and easy to trust.
Knowledge retention
Literature KB that feeds future review cycles
Future questions should build on the same evidence base.

Primary workflow

Deep Research

Synthesize papers, methods, findings, and citations across a body of work.

Secondary value

Knowledge Base

Preserve the literature context for future questions and follow-up reading.

Why Kendr

Research that compounds

The key product promise is that one review makes the next one faster and better informed.

Use with a real topic

Run these prompts on an actual topic area, then replace the placeholders below with real literature syntheses, paper tables, and the resulting session memory.

What The Team Needs

A literature review needs to surface consensus, contradictions, methodological differences, and unresolved questions without losing the citations that make the review trustworthy.

What Kendr Should Produce

A cited synthesis, a structured comparison of papers, and a KB session that makes later follow-up questions easier to answer.

Prompt Samples

Run These In Kendr

Prompt 01 · Literature Synthesis

Build a literature review on [topic].

Use the attached papers, references, and web sources if relevant.

I need:
- the main themes or schools of thought
- methodological patterns
- where findings agree or conflict
- the strongest cited evidence
- the most important open questions

Use citations throughout.

Prompt 02 · Paper Comparison

Compare the top papers on [topic].

Produce a structured table with:
- paper and year
- research question
- method
- sample or data source
- key finding
- limitation
- why the paper matters for our review

Prompt 03 · Research Memory

Create or update a knowledge-base session named literature-[topic]-[date].

Store the paper set, extracted notes, citations, and summary artifacts.

Then provide:
- what this literature base now covers
- what questions are easiest to answer from it
- what papers or evidence should be added next

Example Deliverables

What The Team Gets Back

Cited review

Add the literature summary generated from Prompt 01.

Paper comparison table

Insert the structured paper table from Prompt 02.

Open research questions

List the unresolved questions Kendr identifies.

Literature KB

Show the session and what future review cycles inherit.