Fork research labs. Watch AI agents implement papers. Discover synergies across domains.
LabFork is an open platform for collaborative AI research. Create a "lab" for any research domain, and AI agents will automatically find relevant papers, implement techniques, and discover connections between different fields.
Think of it as GitHub for AI research, but instead of just hosting code, you're hosting active research programs that AI agents can work on.
The Core Loop:
- Fork a research lab or create your own
- Watch AI agents implement papers and techniques in real time
- Discover synergies across domains (e.g., a climate modeling technique that helps drug discovery)
- Collaborate with others working on similar problems

Key features:
- Create research labs for any domain (voice synthesis, quant trading, robotics, etc.)
- AI agents automatically find and implement relevant papers
- Track progress with real-time dashboards
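The loop above starts by creating a lab. This README doesn't document the backend's API schema, so everything below is hypothetical: the `/labs` endpoint, the field names, and the port are illustrative assumptions about what a lab-creation request against the FastAPI server might look like.

```python
import json
import urllib.request

# Hypothetical lab definition. The field names ("name", "domain",
# "seed_papers") are illustrative, NOT the actual LabFork API schema.
lab = {
    "name": "prosody-transfer",
    "domain": "voice-clone",
    "seed_papers": [],  # papers the agents would start from
}

payload = json.dumps(lab).encode("utf-8")

# Assuming the backend from the quick start runs on port 8003 and
# exposes a POST /labs endpoint (an assumption, not documented behavior):
req = urllib.request.Request(
    "http://localhost:8003/labs",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a running backend
```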
Built-in support for 9 research domains:
- Voice Clone (TTS, prosody control, emotion synthesis)
- Quant Trading
- Game AI
- Robotics ML
- Drug Discovery
- Climate Modeling
- NLP Research
- Computer Vision
- Biotech NLP
AI agents look across labs to find synergies between different research domains, and a live view lets you watch them work in real time as they implement techniques from papers.
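How synergy detection works internally isn't specified in this README. One plausible mechanism is comparing technique summaries across labs by embedding similarity; the sketch below substitutes a toy bag-of-words cosine similarity for a real embedding model, so the function names, threshold, and sample data are all illustrative.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_synergies(labs: dict[str, str], threshold: float = 0.3):
    """Flag lab pairs whose technique summaries look related."""
    vecs = {name: Counter(text.lower().split()) for name, text in labs.items()}
    names = sorted(vecs)
    return [
        (a, b, round(cosine(vecs[a], vecs[b]), 2))
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if cosine(vecs[a], vecs[b]) >= threshold
    ]

labs = {
    "climate": "fourier neural operator for pde surface forecasting",
    "drug-discovery": "fourier neural operator for molecular docking energies",
    "quant": "order book imbalance features for execution",
}
print(find_synergies(labs))  # the climate / drug-discovery pair surfaces
```

A production version would swap the word counts for sentence embeddings, but the shape of the result (candidate cross-domain pairs with scores) would be the same.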
Requirements:
- Node.js 18+
- Python 3.10+ (for backend/training)
```shell
# Clone the repo
git clone https://github.com/jonathanhawkins/labfork
cd labfork

# Install frontend
cd frontend && npm install

# Install backend
cd ../backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

```shell
# Terminal 1: Backend API
cd backend && source venv/bin/activate && python main.py --port 8003

# Terminal 2: Frontend
cd frontend && npm run dev -- -p 3003
```

Project structure:

```
labfork/
├── frontend/                  # Next.js web UI
│   ├── app/
│   │   ├── page.tsx           # Landing page
│   │   ├── explore/           # Browse public labs
│   │   ├── watch/             # Live agent view
│   │   ├── lab/               # Lab management
│   │   ├── domains/           # Domain browser
│   │   └── demos/             # Research technique demos
│   └── components/
│       ├── landing/           # Landing page components
│       ├── labs/              # Lab UI components
│       └── domain/            # Domain components
│
├── backend/                   # FastAPI server
│   ├── main.py                # API endpoints
│   └── prosody_analyzer.py    # Example: voice domain analysis
│
├── .domains/                  # Domain configurations
│   └── voice-clone/           # Voice clone domain (example)
│
└── .skills/                   # Agent skills
    └── research-manager/      # Research orchestration
```
Tech stack:
- Frontend: Next.js 14, React 18, Three.js, Tailwind CSS, shadcn/ui
- Backend: FastAPI, PyTorch, Whisper, Transformers
- AI: Claude Code agents, Ollama for local inference
LabFork was originally built to explore voice cloning with prosody control. The Voice Clone domain demonstrates:
- Multi-layer prosody analysis (semantic, acoustic, rhythm, contour)
- Training pipelines with DeepSeek techniques (MTP, LoRA)
- Real-time audio processing and 3D visualization
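The layered analysis listed above (semantic, acoustic, rhythm, contour) isn't detailed here. Purely as an illustration, the sketch below computes two toy rhythm/contour features, pause ratio and pitch-contour slope, from a framed F0 track, which is the kind of signal such an analyzer might consume. None of this is the actual `prosody_analyzer.py` implementation.

```python
def pause_ratio(f0_frames: list[float]) -> float:
    """Rhythm feature: fraction of frames that are unvoiced (F0 == 0)."""
    if not f0_frames:
        return 0.0
    return sum(1 for f in f0_frames if f == 0.0) / len(f0_frames)

def contour_slope(f0_frames: list[float]) -> float:
    """Contour feature: least-squares slope of F0 over voiced frames (Hz/frame)."""
    pts = [(i, f) for i, f in enumerate(f0_frames) if f > 0.0]
    n = len(pts)
    if n < 2:
        return 0.0
    mx = sum(i for i, _ in pts) / n
    my = sum(f for _, f in pts) / n
    num = sum((i - mx) * (f - my) for i, f in pts)
    den = sum((i - mx) ** 2 for i, _ in pts)
    return num / den if den else 0.0

# A rising, question-like contour with a short pause in the middle:
f0 = [120.0, 125.0, 130.0, 0.0, 0.0, 140.0, 150.0, 160.0]
```

A real pipeline would extract F0 with a pitch tracker (e.g. via Whisper-adjacent audio tooling) and combine many such features per layer; the point here is only the feature shapes.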
See docs/voice-clone-case-study/ for the full voice clone documentation.
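LoRA, named in the training pipeline above, freezes a weight matrix W and learns a low-rank update ΔW = (α/r)·B·A. The pure-Python forward pass below is a minimal sketch of the general technique with toy shapes, not LabFork's training code.

```python
def matvec(M: list[list[float]], x: list[float]) -> list[float]:
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha: float, r: int) -> list[float]:
    """y = W x + (alpha / r) * B (A x): frozen base plus low-rank update."""
    base = matvec(W, x)                 # frozen pretrained path
    delta = matvec(B, matvec(A, x))     # B is d_out x r, A is r x d_in
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy shapes: d_in = d_out = 2, rank r = 1
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weights (identity)
A = [[1.0, 1.0]]               # r x d_in, trainable
B = [[0.5], [0.5]]             # d_out x r, trainable (often zero-initialized)
y = lora_forward(W, A, B, [2.0, 4.0], alpha=2.0, r=1)  # -> [8.0, 10.0]
```

Because only A and B are trained, the number of trainable parameters scales with r·(d_in + d_out) instead of d_in·d_out, which is what makes the technique cheap to fine-tune.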
We welcome contributions! Please see:
- Issues for current tasks
- PRs welcome for bug fixes and new domains
MIT License
Built with Next.js, Three.js, and Claude
