DeepScientist is a local-first AI research operating system designed to transform fragmented, manual research tasks into a durable, executable, and cumulative AI workspace. Unlike "research chatbots" that lose context, DeepScientist manages the entire research lifecycle—from literature scouting and baseline reproduction to experiment execution and paper writing—within a structured, version-controlled environment.
The fundamental unit of work in DeepScientist is the Quest. Every research objective (e.g., "Reproduce Paper X" or "Optimize Algorithm Y") is initialized as a dedicated Git repository. This design choice gives every quest a full version history, keeping experiments reproducible and isolated from one another, and letting progress accumulate across sessions.
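The quest-initialization step can be sketched as follows. This is a minimal illustration, not DeepScientist's actual code; the `init_quest` helper and the `QUEST.md` seed file are assumptions for the example.

```python
import subprocess
from pathlib import Path

def init_quest(workspace: Path, name: str) -> Path:
    """Create a quest directory and initialize it as a dedicated Git repo,
    so the research objective starts with a recorded, versioned state."""
    quest_dir = workspace / name
    quest_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(["git", "init"], cwd=quest_dir, check=True)
    # Seed the repository with an initial commit describing the objective.
    (quest_dir / "QUEST.md").write_text(f"# {name}\n")
    subprocess.run(["git", "add", "QUEST.md"], cwd=quest_dir, check=True)
    subprocess.run(
        ["git", "-c", "user.email=quest@local", "-c", "user.name=quest",
         "commit", "-m", "Initialize quest"],
        cwd=quest_dir, check=True,
    )
    return quest_dir
```

Because each quest is its own repository, deleting or archiving an objective never entangles the history of another.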
DeepScientist provides an end-to-end autonomous research pipeline driven by Stage Skills rather than hard-coded logic. Sources: docs/en/README.md70-71
| Capability | Technical Implementation |
|---|---|
| Baseline Reproduction | Automated environment setup, dependency handling, and smoke testing README.md97-99 |
| Experimentation | Managed hypothesis branching using Git worktrees and persistent bash_exec sessions docs/zh/README.md19-23 |
| Scientific Writing | Integrated Tiptap/Novel editor with LaTeX support and automated figure generation README.md109-111 |
| Cross-Surface Collaboration | Uniform API access via Web UI, TUI, and external Connectors (WeChat, Telegram, etc.) README.md113-118 |
| Memory & Artifacts | RAG-based memory retrieval and a "Metric Contract" for tracking research progress docs/en/README.md74-75 |
Sources: README.md89-118 docs/en/README.md3-9 docs/zh/README.md19-24
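The hypothesis-branching capability above relies on Git worktrees, which let several experiment lines exist as separate checkouts of the same repository. A minimal sketch, assuming a hypothetical `branch_hypothesis` helper and a `hypothesis/` branch-naming convention:

```python
import subprocess
from pathlib import Path

def branch_hypothesis(repo: Path, hypothesis: str) -> Path:
    """Check out a new hypothesis branch in its own worktree, so the
    experiment can run side by side with the main line of the quest."""
    worktree = repo.parent / f"{repo.name}-{hypothesis}"
    subprocess.run(
        ["git", "worktree", "add", "-b", f"hypothesis/{hypothesis}", str(worktree)],
        cwd=repo, check=True,
    )
    return worktree
```

Each worktree gets its own working directory but shares the quest's object store, so results committed on a hypothesis branch remain part of the quest's single history.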
DeepScientist utilizes a hybrid architecture: a Node.js Launcher manages the lifecycle and UI, while a Python Daemon handles the research logic, turn execution, and tool orchestration.
The following diagram bridges the gap between the user's research quest and the underlying system components.
[Diagram: Quest Execution and Data Flow]
Sources: docs/en/README.md68-73 docs/zh/README.md114-119 docs/en/19_EXTERNAL_CONTROLLER_GUIDE.md31-40
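The daemon side of the launcher/daemon split can be pictured as a simple request loop: the Node.js Launcher sends one JSON request per line, and the Python Daemon executes a turn and replies. This is an illustrative sketch of the pattern only; the `daemon_loop` function and line-delimited JSON framing are assumptions, not DeepScientist's actual protocol.

```python
import json
import sys

def daemon_loop(handle_turn, stdin=sys.stdin, stdout=sys.stdout):
    """Read one JSON request per line, run a research turn via the
    supplied handler, and write the JSON reply back to the launcher."""
    for line in stdin:
        request = json.loads(line)
        result = handle_turn(request)
        stdout.write(json.dumps({"id": request.get("id"), "result": result}) + "\n")
        stdout.flush()
```

Keeping the daemon as a long-lived process is what allows persistent state, such as open `bash_exec` sessions, to survive across turns.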
DeepScientist does not use a rigid state machine. Instead, the PromptBuilder assembles a context-rich prompt that allows the LLM to select and execute specific Skills based on the current research stage.
[Diagram: Skill and Tool Orchestration]
Sources: docs/en/README.md70-71 docs/en/19_EXTERNAL_CONTROLLER_GUIDE.md46-58 docs/zh/README.md122-123
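The prompt-assembly step described above can be sketched like this. The section layout, `build_prompt` signature, and skill/memory shapes are assumptions for illustration; the point is that the LLM receives the stage and the available Skills as context and chooses among them, rather than following a fixed state machine.

```python
def build_prompt(stage: str, skills: dict, memory: list, objective: str) -> str:
    """Assemble a stage-aware prompt: objective, current research stage,
    the Skills the model may invoke, and any retrieved memory."""
    sections = [
        f"## Objective\n{objective}",
        f"## Current stage\n{stage}",
        "## Available skills\n"
        + "\n".join(f"- {name}: {desc}" for name, desc in skills.items()),
    ]
    if memory:
        sections.append("## Retrieved memory\n" + "\n".join(f"- {m}" for m in memory))
    return "\n\n".join(sections)
```

Because skill selection is delegated to the model, adding a new Skill means adding a new entry to the context rather than rewiring control flow.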
This wiki is organized to support both users and developers:
- Setup: from `npm install` to the running daemon.
- Internals: `CodexRunner`, plus `bash_exec`, artifact, and memory tool semantics.
- Reference: the `ds` command reference.