Mixpeek Showcase

Real-world examples and demos showcasing Mixpeek's multimodal search capabilities.

What is Mixpeek?

Mixpeek is a multimodal AI platform that lets you build semantic search experiences across images, videos, audio, and text. These examples demonstrate how to ingest various types of content and create powerful search applications.
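To make the idea concrete, here is a minimal sketch of a natural-language search call against a Mixpeek-style REST endpoint. The URL, auth scheme, request fields, and response shape are assumptions for illustration only; refer to the official Mixpeek documentation for the actual API.

```python
import os

import requests

# Assumed endpoint URL and bearer-token auth -- illustrative only,
# not Mixpeek's documented API.
MIXPEEK_SEARCH_URL = "https://api.mixpeek.com/v1/search"
API_KEY = os.environ["MIXPEEK_API_KEY"]


def semantic_search(query: str, limit: int = 10) -> list[dict]:
    """Send a natural-language query and return ranked matches."""
    resp = requests.post(
        MIXPEEK_SEARCH_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"query": query, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]  # "results" is an assumed response field


if __name__ == "__main__":
    # e.g. against an image collection like the NGA example below
    for hit in semantic_search("portrait of a woman in profile, 19th century"):
        print(hit)
```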

Examples

| Example | Description | Data Source | Features | Live Demo |
| --- | --- | --- | --- | --- |
| 🎨 National Portrait Gallery | Semantic image search engine for portrait photography using the National Gallery of Art's open-access collection | ~120,000 open-access images from NGA | Natural language search, metadata filtering, visual similarity | Try it live → |
| 📚 CS50 Learning | Multimodal search engine for educational content using Harvard's CS50 course materials | 12+ lectures with videos, slides, and code from the Internet Archive | Cross-modal search, video segment search, code search, lecture discovery | Coming soon |

Getting Started

Each example includes:

  • Download scripts - Fetch open-access data from public sources
  • Ingestion scripts - Upload content to Mixpeek with metadata (a sketch of this pattern follows this list)
  • Live demos - Try the search experience
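
The download-then-ingest pattern the scripts follow looks roughly like this. It is a sketch under stated assumptions: the /v1/ingest endpoint, field names, metadata keys, and local data layout are hypothetical stand-ins, and each example's own scripts are the authoritative version.

```python
import os
import pathlib

import requests

API_KEY = os.environ["MIXPEEK_API_KEY"]           # assumed auth scheme
INGEST_URL = "https://api.mixpeek.com/v1/ingest"  # hypothetical endpoint


def ingest_file(path: pathlib.Path, metadata: dict[str, str]) -> None:
    """Upload a single asset together with its metadata."""
    with path.open("rb") as f:
        resp = requests.post(
            INGEST_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": (path.name, f)},
            data=metadata,  # metadata fields sent alongside the file
            timeout=120,
        )
    resp.raise_for_status()


# Walk a hypothetical local folder of downloaded open-access images
# and upload each one with source/license attribution.
for image in sorted(pathlib.Path("data/nga").glob("*.jpg")):
    ingest_file(image, {"source": "National Gallery of Art", "license": "CC0"})
```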

Prerequisites

Quick Start

  1. Clone this repository
  2. Navigate to an example directory
  3. Follow the README instructions
  4. Try the live demo

Resources

License

  • Code: MIT License (see individual examples)
  • Data: Each example uses openly licensed data - see individual READMEs for attribution requirements

Contributing

Have an interesting use case? We'd love to see it! Feel free to submit a pull request with your own example.
