Real-world examples and demos showcasing Mixpeek's multimodal search capabilities.
Mixpeek is a multimodal AI platform that lets you build semantic search experiences across images, videos, audio, and text. These examples demonstrate how to ingest various types of content and create powerful search applications.
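For orientation, here is a minimal sketch of what a natural-language search call might look like from Python. The endpoint path, request payload, and response shape are assumptions for illustration only; consult the Mixpeek API documentation for the actual interface.

```python
import os

import requests

# Assumed base URL and endpoint -- placeholders, not the documented Mixpeek API.
BASE_URL = "https://api.mixpeek.com"
HEADERS = {"Authorization": f"Bearer {os.environ['MIXPEEK_API_KEY']}"}


def search(query, limit=10):
    """Send a natural-language query and return the top matches (hypothetical endpoint)."""
    response = requests.post(
        f"{BASE_URL}/search",  # placeholder path
        headers=HEADERS,
        json={"query": query, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"results": [...]}
    return response.json().get("results", [])


if __name__ == "__main__":
    for hit in search("a seated woman in a red dress, 19th-century portrait"):
        print(hit)
```

The same pattern applies to the video and audio examples; only the query text and any metadata filters change.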
| Example | Description | Data Source | Features | Live Demo |
|---|---|---|---|---|
| 🎨 National Portrait Gallery | Semantic image search engine for portrait photography using the National Gallery of Art's open-access collection | ~120,000 open-access images from NGA | Natural language search, metadata filtering, visual similarity | Try it live → |
| 📚 CS50 Learning | Multimodal search engine for educational content using Harvard's CS50 course materials | 12+ lectures with videos, slides, and code from Internet Archive | Cross-modal search, video segment search, code search, lecture discovery | Coming soon |
Each example includes:
- Download scripts - Fetch open-access data from public sources
- Ingestion scripts - Upload content to Mixpeek with metadata (see the sketch after this list)
- Live demos - Try the search experience
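As a rough sketch of how the download and ingestion scripts fit together, the snippet below reads a local CSV manifest of image URLs and metadata and registers each item with Mixpeek. The manifest filename, column names, and ingest endpoint are illustrative assumptions; each example's README documents the real scripts and data layout.

```python
import csv
import os

import requests

HEADERS = {"Authorization": f"Bearer {os.environ['MIXPEEK_API_KEY']}"}
INGEST_URL = "https://api.mixpeek.com/ingest"  # placeholder endpoint, not the documented API


def ingest_from_manifest(manifest_path):
    """Register each image listed in a CSV manifest, attaching its metadata (assumed columns)."""
    with open(manifest_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            payload = {
                "url": row["image_url"],  # hypothetical column name
                "metadata": {
                    "title": row.get("title", ""),
                    "artist": row.get("artist", ""),
                    "year": row.get("year", ""),
                },
            }
            resp = requests.post(INGEST_URL, headers=HEADERS, json=payload, timeout=60)
            resp.raise_for_status()
            print("Ingested", row["image_url"])


if __name__ == "__main__":
    ingest_from_manifest("manifest.csv")  # hypothetical manifest produced by a download script
```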
Prerequisites:
- Python 3.7+
- Mixpeek API key (sign up for free)
Getting started:
1. Clone this repository
2. Navigate to an example directory
3. Follow that example's README instructions
4. Try the live demo
Licensing:
- Code: MIT License (see individual examples)
- Data: Each example uses openly licensed data; see the individual READMEs for attribution requirements
Have an interesting use case? We'd love to see it! Feel free to submit a pull request with your own example.