⚠️ Disclaimer: This project was entirely vibe-coded and has plenty of rough edges. Use at your own risk! There may be bugs, unexpected behavior, and things that just don't work quite right. No guarantees, no warranties—just a fun experiment. Proceed with caution and a sense of adventure. 🚀
A personalized year-in-review experience for your Plex server users—think Spotify Wrapped, but for your media library.
Plex Wrapped uses a three-stage pipeline to create personalized, entertaining wrap experiences:
Data is collected from Tautulli (which connects to your Plex server):
- Watch history and session data
- Playback statistics and device usage
- Media metadata (genres, actors, directors)
- User accounts and artwork (proxied through Tautulli)
The system analyzes each user's viewing patterns including watch time, binge sessions, genre preferences, device usage, and more.
Note: No direct Plex connection required! Tautulli handles everything.
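Tautulli exposes everything through a single JSON endpoint, `/api/v2`, where a `cmd` query parameter selects the command. A minimal sketch of building a `get_history` request URL (the endpoint shape is Tautulli's documented API; the helper name and defaults are illustrative, not this project's actual code):

```python
from urllib.parse import urlencode


def history_request_url(base_url, api_key, user_id=None, length=1000):
    """URL for Tautulli's API v2 `get_history` command.

    The single-endpoint /api/v2?cmd=... shape is Tautulli's real API;
    which response fields this project consumes is an assumption.
    """
    params = {"apikey": api_key, "cmd": "get_history", "length": length}
    if user_id is not None:
        params["user_id"] = user_id
    return f"{base_url.rstrip('/')}/api/v2?{urlencode(params)}"
```

The response wraps its payload under `response.data`, so a client would fetch this URL and drill into that key.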
Raw data is transformed into meaningful insights:
- Cross-user rankings: Compare watch time, episodes, movies, and binge sessions across all users
- Binge detection: Automatically identifies marathon viewing sessions
- Genre analysis: Calculates top genres with percentages
- Pattern recognition: Discovers seasonal trends, repeat watches, device preferences
- Personalized stats: Daily averages, longest sessions, most-watched actors/directors
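Binge detection, for instance, can be done in a single pass over time-sorted history rows. A hedged sketch (the field names, thresholds, and function name are assumptions, not this project's actual implementation):

```python
from datetime import timedelta


def find_binges(sessions, min_episodes=3, max_gap=timedelta(minutes=30)):
    """Group history rows into binge runs: at least `min_episodes` of the
    same show watched back-to-back with gaps under `max_gap`.

    `sessions` is a list of dicts with 'show', 'started', and 'stopped'
    (datetime) keys, a simplification of Tautulli history rows.
    """
    sessions = sorted(sessions, key=lambda s: s["started"])
    binges, run = [], []
    for s in sessions:
        if run and (s["show"] == run[-1]["show"]
                    and s["started"] - run[-1]["stopped"] <= max_gap):
            run.append(s)  # continues the current marathon
        else:
            if len(run) >= min_episodes:
                binges.append(run)
            run = [s]  # start a new candidate run
    if len(run) >= min_episodes:
        binges.append(run)
    return binges
```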
An LLM (OpenAI GPT) transforms dry statistics into fun, engaging narratives:
- Cohesive storytelling: Cards build progressively from basic stats to deeper insights
- Humorous commentary: Witty, teasing descriptions that make stats memorable
- Personalized context: Uses cross-user comparisons ("You're #1!", "The night owl of the group!")
- Progressive reveal: Swipable cards that unfold your viewing story
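A plausible sketch of how stats might be packaged into chat messages for the LLM (the prompt wording, function name, and stats fields are illustrative; the project's real prompts differ):

```python
def build_card_messages(username, stats, extra_context=""):
    """Chat messages asking an LLM to narrate one user's stats.

    `stats` is a plain dict of computed numbers; `extra_context` maps to
    the config's custom_prompt_context. Wording here is a sketch, not
    the project's actual prompt.
    """
    stat_lines = "\n".join(f"- {k}: {v}" for k, v in stats.items())
    system = ("You write short, witty year-in-review cards for a Plex "
              "server, in the style of Spotify Wrapped. Build from basic "
              "stats toward deeper insights; tease, don't insult. "
              + extra_context)
    user = f"Write cards for {username} from these stats:\n{stat_lines}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]
```

The returned list can be passed as `messages` to the OpenAI SDK's `client.chat.completions.create(...)`.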
For a premium visual experience, each card can be rendered as a unique AI-generated image using Google's Nano Banana Pro model:
- Creates visually stunning, personalized card images
- Supports high-resolution outputs with enhanced text rendering
- Falls back to traditional UI if disabled or unavailable
⚠️ Cost Warning: Image generation can get expensive at scale. Each card requires an API call. Consider enabling only for special occasions or key users.
- 📊 Comprehensive Analytics: Detailed statistics from your Plex viewing history
- 🎯 Deep Insights: Beyond basic stats—discover binge streaks, genre preferences, viewing patterns
- 🎨 Beautiful UI: Modern, swipable cards inspired by Spotify Wrapped
- 👥 Multi-User Support: Generate wraps for all users on your Plex server
- 🤖 AI-Powered Cards: LLM-generated cohesive, funny, personalized narratives
- 📈 Cross-User Rankings: See how you stack up against other viewers
- 🖼️ AI Images: Optional AI-generated card visuals (Nano Banana Pro)
- Docker and Docker Compose (recommended)
- Tautulli instance (connected to your Plex server)
- Python 3.9+ (for local development only)
- Node.js 16+ (for local development only)
```bash
cp config.yaml.example config.yaml
```

Edit `config.yaml`:

```yaml
tautulli:
  url: http://your-tautulli-server:8181
  api_key: YOUR_TAUTULLI_API_KEY

# OpenAI for fun, engaging card text (recommended)
openai:
  api_key: YOUR_OPENAI_API_KEY
  enabled: true

# Optional: AI-generated card images (can be expensive)
image_generation:
  api_key: YOUR_GOOGLE_AI_API_KEY
  enabled: false

# Optional: Custom date range
time_range:
  start_date: 2024-01-01
  end_date: 2024-12-31

# Optional: Map usernames to friendly display names
name_mappings:
  johndoe123: John
  janedoe456: Jane

# Optional: Add custom context for AI-generated content
# This is injected into the AI prompt to personalize the tone
custom_prompt_context: "This is a family server - keep it fun and friendly!"

# Optional: Users to exclude from wrap generation
excluded_users:
  - Local
  - admin
```

```bash
# Start containers
docker-compose up -d

# Generate wraps
docker-compose exec backend python pregenerate.py

# View at http://localhost:8765
```

After generation, shareable links are printed to the console:

```
http://yourserver:8765/w/{token}
```

Each user gets a unique token. Tokens are stored in `data/tokens.json`.
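One way such a token map can be maintained, sketched with Python's `secrets` module (the exact layout of `tokens.json` and the function name are assumptions):

```python
import json
import secrets
from pathlib import Path


def assign_tokens(usernames, path="data/tokens.json"):
    """Give each user a URL-safe token, persisting the mapping.

    Existing tokens are kept so previously shared links stay valid
    across reruns; the token -> username layout is an assumption.
    """
    p = Path(path)
    tokens = json.loads(p.read_text()) if p.exists() else {}
    existing_users = set(tokens.values())
    for name in usernames:
        if name not in existing_users:
            tokens[secrets.token_urlsafe(16)] = name
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(json.dumps(tokens, indent=2))
    return tokens
```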
Control how cards are displayed using the `mode` URL parameter:

| Mode | URL | Description |
|---|---|---|
| Auto | `/w/{token}` | Prefers AI images if available, else the HTML UI |
| Images | `/w/{token}?mode=img` | Force AI-generated images (if available) |
| HTML | `/w/{token}?mode=html` | Force traditional HTML/UI cards |
By default (no mode parameter), the app automatically uses AI-generated images when present and falls back to HTML cards otherwise.
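That fallback rule is simple enough to state as code. A sketch (the function name and return values are illustrative, not the project's actual code):

```python
def resolve_render_mode(mode_param, images_available):
    """Pick the card renderer for a request.

    `mode_param` is the raw ?mode= value (None when absent).
    """
    if mode_param == "html":
        return "html"
    # Both "img" and the default prefer images, with HTML as fallback
    return "img" if images_available else "html"
```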
| Service | Where to Get |
|---|---|
| Tautulli API | Tautulli → Settings → Web Interface → API |
| OpenAI API | platform.openai.com |
| Google AI | aistudio.google.com (for image generation) |
The `pregenerate.py` script handles the entire pipeline:

```bash
# Full pipeline for all users (data → LLM cards → images if enabled)
python pregenerate.py

# Full pipeline for a single user
python pregenerate.py <username>

# Force regenerate even if data/wraps already exist
python pregenerate.py --force
```

The generation pipeline has three stages that can be run independently:
| Mode | Flag | Description |
|---|---|---|
| Data Only | `--data-only` | Collect raw data from Tautulli, skip LLM/images |
| Cards Only | `--cards-only` | Regenerate LLM cards from existing cached data |
| Images Only | `--images-only` | Generate AI images for existing wraps |
```bash
# Stage 1: Collect raw data only (useful for testing API connection)
python pregenerate.py --data-only

# Stage 2: Regenerate LLM cards using cached data (skip data collection)
python pregenerate.py --cards-only

# Stage 3: Generate images for existing wraps (requires image_generation enabled)
python pregenerate.py --images-only

# Combine with a single user
python pregenerate.py <username> --cards-only
python pregenerate.py <username> --images-only --force
```

Note: Only one mode flag can be used at a time. Running without a mode flag executes the full pipeline.
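The "one mode flag at a time" rule is the kind of thing argparse's mutually exclusive groups enforce for free. A sketch of what the CLI shape could look like (not necessarily the script's actual argument parser):

```python
import argparse


def parse_args(argv=None):
    """CLI shape matching the table above: an optional username plus at
    most one stage flag (argparse enforces the exclusivity)."""
    parser = argparse.ArgumentParser(prog="pregenerate.py")
    parser.add_argument("username", nargs="?", help="limit to one user")
    parser.add_argument("--force", action="store_true",
                        help="regenerate even if outputs exist")
    stage = parser.add_mutually_exclusive_group()
    stage.add_argument("--data-only", action="store_true")
    stage.add_argument("--cards-only", action="store_true")
    stage.add_argument("--images-only", action="store_true")
    return parser.parse_args(argv)
```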
- `wraps/` — Final wrap JSON files (served to frontend)
- `wraps_data/` — Raw data and cross-user insights
- `generated_images/` — AI-generated card images (if enabled)
- `data/tokens.json` — Token mappings for shareable links
When `image_generation.enabled: true`, the system uses Google's Nano Banana Pro model to create unique visuals for each card.
- After the LLM generates card text, each card is sent to Nano Banana Pro
- Images are generated based on the card's data and narrative
- Saved to `generated_images/{username}/card_{index}.png`
- Frontend displays the image instead of UI components (with fallback)
The integration uses the `google-genai` SDK:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",  # Nano Banana Pro
    contents="Your prompt here",
    config=types.GenerateContentConfig(response_modalities=["IMAGE"])
)
```

- If image generation fails → falls back to traditional UI
- If disabled → uses standard card UI
- If an image fails to load → shows UI instead
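That failure handling boils down to a try/except around the generator call. A sketch with hypothetical names (`image_generator` stands in for whatever calls the Nano Banana Pro API):

```python
def render_card(card, image_generator=None):
    """Return ("image", path) when generation succeeds, else ("html", card).

    `image_generator` is any callable card -> saved image path; passing
    None models image_generation.enabled: false.
    """
    if image_generator is None:
        return ("html", card)
    try:
        return ("image", image_generator(card))
    except Exception:
        # Any API/network failure degrades to the standard card UI
        return ("html", card)
```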
```bash
# Install backend
pip install -r requirements.txt

# Install frontend
cd frontend && npm install

# Run backend (terminal 1)
python -m uvicorn main:app --reload

# Run frontend (terminal 2)
cd frontend && npm start
```

Or use the startup script:

```bash
./start.sh
```

- Frontend: http://localhost:8765
- Backend API: http://localhost:8766
```bash
# Start
docker-compose up -d

# View logs
docker-compose logs -f

# Regenerate wraps
docker-compose exec backend python pregenerate.py

# Rebuild after changes
docker-compose up -d --build

# Stop
docker-compose down
```

Note: Docker networking configuration depends on where Tautulli is running:

- Tautulli on host machine: Use `http://host.docker.internal:8181`
- Tautulli in Docker (same network): Use `http://tautulli-container-name:8181` (or the service name if using docker-compose)
- Tautulli in Docker (different network): Either join the networks or use `host.docker.internal` if Tautulli exposes ports to the host
- Never use Docker bridge IPs (like `172.17.0.10`): they change and won't work across networks
| Endpoint | Description |
|---|---|
| `GET /api/users` | List all Plex users |
| `GET /api/wrap/{username}` | Get wrap for a user |
| `GET /api/wrap-by-token/{token}` | Get wrap by shareable token |
| `GET /api/token/{username}` | Get token for a username |
| `GET /api/generated-image?path={path}` | Serve a generated image |
| `GET /api/health` | Health check |
| Issue | Solution |
|---|---|
| "User not found" | Ensure the username matches exactly as in Tautulli |
| Empty data | Check that the date range in the config includes actual watch dates |
| "Failed to connect" | Verify Tautulli is accessible from the Docker container |
| Background images missing | Check the browser console; verify the API URL |
| Token invalid | Regenerate wraps: `python pregenerate.py` |
If you're getting connection timeouts when both services are in Docker:

- Tautulli on host machine: Use `http://host.docker.internal:8181` in the config
- Tautulli in Docker (same docker-compose):
  - Add the Tautulli service to the same `networks` section
  - Use `http://tautulli-service-name:8181` (the service name from docker-compose)
- Tautulli in Docker (separate docker-compose):
  - Option A: Join networks by adding `external_network: tautulli-network` to the plexwrap docker-compose
  - Option B: Use `host.docker.internal:8181` if Tautulli exposes port 8181 to the host
- Never use Docker bridge IPs (like `172.17.0.10`): they're ephemeral and won't work

Example for the same docker-compose:

```yaml
services:
  tautulli:
    # ... your tautulli config
    networks:
      - plexwrap-network

  backend:
    # ... existing config
    networks:
      - plexwrap-network
```

- Port conflicts? Change ports in `docker-compose.yml`
- After code changes: `docker-compose build --no-cache && docker-compose up -d`
```bash
# Check token exists
cat data/tokens.json

# Test API
curl http://localhost:8766/api/health
curl http://localhost:8766/api/wrap-by-token/YOUR_TOKEN
```

- Date Range: The default is the last year. Customize via the `time_range` config
- LLM Quality: OpenAI GPT creates much more engaging cards than raw data
- Image Generation: Enable sparingly; it adds significant cost but creates stunning visuals
- Pre-generate: Always pre-generate wraps for best performance
If you find this project useful and would like to support its development, consider:
MIT



