"Not just a Reader, but a Cognitive Recorder."
Welcome to Read it DEEP, the AI-driven platform designed to transform how you interact with research papers. Move beyond passive reading into active knowledge construction with our "Deep Read" philosophy.
Read it DEEP is a dual-engine platform:
- Cognitive Recorder: It tracks your reading path, highlights, and thought process.
- Research Asset Factory: It refines raw papers into structured assets—methods, datasets, and inspirations.
Powered by LangGraph and state-of-the-art LLMs, we turn your library into a Dynamic Knowledge Graph.
Efficiently bringing knowledge into your system.
The journey begins with our Smart Ingestion pipeline.
- Drag & Drop: Simply drag your PDF into the upload area.
- Mineru Parsing: Our integration with Mineru V4 ensures high-fidelity parsing, preserving layout, formulas, and images as Markdown.
- Real-time Feedback: Watch your paper progress through Uploading → Parsing → Indexing (a minimal sketch of this status flow follows the list).
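Under the hood this is a simple job-status flow. Below is a hypothetical sketch of what it could look like with FastAPI (the project's backend framework); the routes, status names, and parser/indexer stubs are illustrative assumptions, not the actual API.

```python
# Illustrative sketch of the ingestion status pipeline (not the real API).
from enum import Enum
from uuid import uuid4

from fastapi import BackgroundTasks, FastAPI, UploadFile

app = FastAPI()
jobs: dict[str, str] = {}  # job_id -> current status


class Status(str, Enum):
    UPLOADING = "Uploading"
    PARSING = "Parsing"
    INDEXING = "Indexing"
    DONE = "Done"


def parse_pdf(pdf_bytes: bytes) -> str:
    """Stand-in for the Mineru parsing step (returns Markdown)."""
    return "# parsed markdown"


def index_markdown(markdown: str) -> None:
    """Stand-in for chunking/embedding the parsed Markdown."""


def ingest(job_id: str, pdf_bytes: bytes) -> None:
    jobs[job_id] = Status.PARSING
    markdown = parse_pdf(pdf_bytes)
    jobs[job_id] = Status.INDEXING
    index_markdown(markdown)
    jobs[job_id] = Status.DONE


@app.post("/papers")
async def upload_paper(file: UploadFile, tasks: BackgroundTasks):
    job_id = uuid4().hex
    jobs[job_id] = Status.UPLOADING
    tasks.add_task(ingest, job_id, await file.read())
    return {"job_id": job_id}


@app.get("/papers/{job_id}/status")
def get_status(job_id: str):
    return {"status": jobs.get(job_id, "unknown")}
```

The frontend only needs to poll the status route to drive the progress indicator.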
Your organized research headquarters.
Once ingested, papers appear in your Library.
- Auto-Metadata: We automatically fetch titles, authors, and publication dates.
- Visual Cards: Papers are presented as cards with key details, making retrieval instant (the card's data shape is sketched after this list).
- Search & Filter: Quickly find papers by keywords or topics.
Focus, connect, and think.
Clicking "Start Deep Reading" activates our signature 3-Column Layout:
| Left: Context | Center: Content | Right: Workbench |
| --- | --- | --- |
| Knowledge Graph & Analysis: see how this paper connects to others. | Zen Reader: distraction-free Markdown rendering with interactive citations. | Smart Workbench: your active workspace for extracting value. |
- Interactive Citations: Hover over a citation [1] to see the reference instantly without losing your place (a lookup sketch follows this list).
- Translation: Seamlessly switch between original and translated text with a single click.
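The hover behavior only needs a fast reference lookup on the backend. Here is a hypothetical sketch of such an endpoint; the route and payload shape are assumptions for illustration, and the reference entry is made up.

```python
# Illustrative citation lookup the hover popover could call.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# citation key -> full reference string, as extracted during parsing
REFERENCES = {
    "1": "A. Author et al. Example Reference Title. Venue, 2024.",
}


@app.get("/papers/{paper_id}/citations/{key}")
def citation(paper_id: str, key: str):
    ref = REFERENCES.get(key)
    if ref is None:
        raise HTTPException(status_code=404, detail="unknown citation key")
    return {"key": key, "reference": ref}
```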
Where information becomes an asset.
This is the heart of "Deep Reading".
- Method Alchemy (方法炼金台): Select a method description in the text, and the AI extracts parameters, loss functions, and even generates PyTorch pseudocode.
- Data Warehouse (资产仓库): Automatically validates dataset URLs and licenses (a rough sketch of such a check follows this list).
- Idea Canvas (灵感画板): Record your hypotheses and link them directly to the evidence in the text.
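As an example of the Data Warehouse step, a dataset link check can be as simple as a HEAD request plus a license allow-list. This is a rough sketch under those assumptions, not the shipped implementation; the helper name and heuristics are invented for illustration.

```python
# Rough sketch of a dataset URL/license check (illustrative only).
import httpx

KNOWN_LICENSES = {"mit", "apache-2.0", "cc-by-4.0", "cc-by-sa-4.0"}


def validate_dataset(url: str, license_id: str) -> dict:
    try:
        resp = httpx.head(url, follow_redirects=True, timeout=10)
        reachable = resp.status_code < 400
    except httpx.HTTPError:
        reachable = False
    return {
        "url": url,
        "reachable": reachable,
        "license_recognized": license_id.lower() in KNOWN_LICENSES,
    }


print(validate_dataset("https://example.com/dataset", "CC-BY-4.0"))
```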
Visualizing your second brain.
As you read, the graph evolves.
- Citation Links: See what influenced this paper.
- Similarity Connections: Discover papers in your library with similar concepts, powered by vector embeddings (a toy example follows this list).
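A toy example of how similarity edges can be derived from embeddings. In the product this search runs in pgvector; the vectors and threshold below are made up.

```python
# Toy illustration: papers whose embedding vectors are close get linked.
import numpy as np

embeddings = {
    "paper_a": np.array([0.9, 0.1, 0.0]),
    "paper_b": np.array([0.8, 0.2, 0.1]),
    "paper_c": np.array([0.0, 0.1, 0.9]),
}


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


query = "paper_a"
for other, vec in embeddings.items():
    if other == query:
        continue
    score = cosine(embeddings[query], vec)
    if score > 0.8:  # arbitrary cutoff for drawing an edge
        print(f"{query} -> {other} (similarity {score:.2f})")
```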
- Local-First AI: Powered by local LLMs (vLLM/Ollama compatible) for privacy and speed.
- LangGraph Agents: Sophisticated loops for self-correcting extraction and verification (sketched after this list).
- Vector Database: pgvector integration for semantic search and graph construction.
- Modern Stack: Built with React, Vite, Tailwind, Python FastAPI, and SQLite/PostgreSQL.
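To make the self-correcting loop concrete, here is a hedged sketch in the LangGraph style: an extract node, a verify node, and a conditional edge that retries until the extraction passes or a retry budget is exhausted. The node logic is dummy; the real agents and prompts are not part of this README.

```python
# Hedged sketch of a self-correcting extraction loop (dummy node logic).
from typing import TypedDict

from langgraph.graph import END, StateGraph


class ExtractState(TypedDict):
    text: str
    extraction: str
    attempts: int
    valid: bool


def extract(state: ExtractState) -> dict:
    # Stand-in for the LLM extraction agent.
    return {"extraction": state["text"][:40], "attempts": state["attempts"] + 1}


def verify(state: ExtractState) -> dict:
    # Stand-in for the verification agent; dummy acceptance rule.
    return {"valid": len(state["extraction"]) > 0}


def route(state: ExtractState) -> str:
    # Stop when the extraction passes or the retry budget is spent.
    if state["valid"] or state["attempts"] >= 3:
        return END
    return "extract"


graph = StateGraph(ExtractState)
graph.add_node("extract", extract)
graph.add_node("verify", verify)
graph.set_entry_point("extract")
graph.add_edge("extract", "verify")
graph.add_conditional_edges("verify", route)
app = graph.compile()

result = app.invoke(
    {"text": "Method: we train with ...", "extraction": "", "attempts": 0, "valid": False}
)
print(result["extraction"])
```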
Ready to dive deep?
- Upload your first paper.
- Open it in the Reader.
- Activate the Workbench.
- Build your Knowledge Graph.
Local development:

```bash
# Configure environment variables
cp .env.example .env
# Edit .env and fill in your API keys

# Start the dev server
./start.sh
```

Docker deployment:

```bash
cp .env.docker.example .env
./docker-start.sh          # defaults: Frontend 3000, Backend 8080
./docker-start.sh 80 8080  # serve the frontend on port 80
```

GitHub Actions automatically builds images and pushes them to GHCR on every push.
Deploying from GHCR:

```bash
# 1. Upload the config to the server
scp docker-compose.ghcr.yml .env user@server:/opt/readitdeep/

# 2. Pull and start on the server
cd /opt/readitdeep
docker compose -f docker-compose.ghcr.yml pull
FRONTEND_PORT=80 GITHUB_OWNER=oMygpt docker compose -f docker-compose.ghcr.yml up -d
```

To update a running deployment, pull the latest images and restart:

```bash
docker compose -f docker-compose.ghcr.yml pull
docker compose -f docker-compose.ghcr.yml up -d
```

All data is stored under the ./readit_data/ directory:
- db/ - database, papers.json, workbench.json
- uploads/ - PDFs and parsing results
- redis/ - cache data
📚 Detailed deployment guide: docs/DOCKER_DEPLOYMENT.md
Read it DEEP — Where reading meets thinking.
This project is licensed under the GNU Affero General Public License v3.0 (AGPL v3).
- Open Source: You are free to use, modify, and distribute this software under the terms of the AGPL v3.
- Copyleft: If you modify this software and distribute it (web service included), you must make your modifications open source under the same license.
- Dual Licensing: For commercial use cases, proprietary integration, or exemptions from AGPL conditions, please contact CHUNLIN@Readit DEEP for a commercial license.
Copyright (C) 2025 CHUNLIN@Readit DEEP.