A Python application that automatically fetches posts from multiple subreddits of your choosing, generates AI-powered summaries using any OpenAI-compatible model (currently GPT-120B OSS via Hugging Face inference providers), and sends beautifully formatted email digests to your inbox.
- 🔍 Multi-Subreddit Fetching: Collects posts from multiple subreddits simultaneously
- 🤖 AI-Powered Summaries: Uses any OpenAI-compatible model to create concise, intelligent summaries
- 📧 Email Delivery: Sends HTML and plain text email digests via Gmail SMTP
- 🎨 Beautiful HTML Templates: Modern, responsive email design
- ⚡ Asynchronous Processing: Efficiently handles large numbers of posts with async/await
- 🚀 Concurrent Operations: Fetches subreddits and processes AI summaries in parallel
- 📊 Smart Organization: Groups posts by subreddit with statistics
- ☁️ Vercel Deployment: Ready for deployment to Vercel with zero configuration
- Python 3.12 or higher
- uv package manager (recommended) or pip
- Reddit API credentials
- OpenAI-compatible API key (currently using GPT-120B OSS with Hugging Face inference providers)
- Gmail account with app password setup
- Clone the repository

  ```bash
  git clone <your-repo-url>
  cd reddit_news
  ```
- Install dependencies

  Using uv (recommended):

  ```bash
  uv sync
  ```

  Using pip:

  ```bash
  pip install -r requirements.txt
  ```
- Set up environment variables

  Create a `.env` file in the project root:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` with your credentials:

  ```bash
  # Reddit API credentials (get from https://www.reddit.com/prefs/apps)
  CLIENT_ID=your_reddit_client_id
  CLIENT_SECRET=your_reddit_client_secret

  # Gmail credentials
  GMAIL_EMAIL=your_email@gmail.com
  GMAIL_APP_PASSWORD=your_gmail_app_password
  TO_EMAIL=recipient@gmail.com
  ```
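With the variables in place, the app can validate them at startup. A minimal sketch of such a check (the `load_config` helper and `REQUIRED_VARS` list are illustrative names, not taken from the source; the real entry point may read these variables differently):

```python
import os

# Optional: load a local .env file if python-dotenv is available
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

REQUIRED_VARS = ["CLIENT_ID", "CLIENT_SECRET", "GMAIL_EMAIL", "GMAIL_APP_PASSWORD", "TO_EMAIL"]

def load_config() -> dict:
    """Return the required credentials, failing fast if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

Failing fast here gives a clearer error than an authentication failure deep inside the Reddit or SMTP calls.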
```bash
# Using uv
uv run src/main.py

# Using Python directly
python src/main.py
```

Note: The application runs fully asynchronously for improved performance, processing Reddit posts and AI summaries concurrently.
The application is configured for Vercel deployment via the included `vercel.json` file. Simply connect your GitHub repository to Vercel and deploy; no additional configuration is needed.

Note: The `start_fastapi.sh` script is only needed for local development and is not required for Vercel deployment.
Edit the `subreddit_list` in `src/main.py`:

```python
subreddit_list = [
    "technology", "programming", "LocalLLaMA",
    # Add your favorite subreddits here
]
```

Change the number of posts fetched per subreddit:

```python
posts_per_subreddit = 6  # Adjust this value
```

Modify the maximum number of posts shown in the email:

```python
html_email = create_condensed_html_email(
    formatted_posts, subreddit_list, max_display=100  # Adjust this
)
```

- Go to Reddit App Preferences
- Click "Create App" or "Create Another App"
- Choose "script" as the app type
- Note down your `client_id` and `client_secret`
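These credentials plug straight into `asyncpraw`, the async Reddit wrapper listed in the dependencies. A hedged sketch of fetching one subreddit (the `fetch_top_posts` function name, user agent string, and returned fields are assumptions for illustration, not the app's actual code):

```python
import os

async def fetch_top_posts(subreddit_name: str, limit: int = 6) -> list[dict]:
    """Fetch hot posts from one subreddit via the async Reddit API."""
    import asyncpraw  # async Reddit API wrapper (listed in the dependencies)

    reddit = asyncpraw.Reddit(
        client_id=os.environ["CLIENT_ID"],
        client_secret=os.environ["CLIENT_SECRET"],
        user_agent="reddit_news digest script",  # Reddit requires a descriptive user agent
    )
    try:
        subreddit = await reddit.subreddit(subreddit_name)
        # subreddit.hot() is an async generator, so it is consumed with `async for`
        return [
            {"title": post.title, "url": post.url, "score": post.score}
            async for post in subreddit.hot(limit=limit)
        ]
    finally:
        await reddit.close()  # release the underlying HTTP session
```

Running several of these coroutines under `asyncio.gather()` is what lets the app fetch all subreddits simultaneously.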
The application works with any OpenAI-compatible API. It is currently configured to use GPT-120B OSS via Hugging Face inference providers.
- Obtain an API key from your preferred OpenAI-compatible service
- For Hugging Face inference, visit Hugging Face
- Sign up/login and navigate to your account settings to create an API key
- Ensure your API key has sufficient credits for API usage
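Because the endpoint is OpenAI-compatible, the standard `openai` client can be pointed at it by overriding the base URL. A sketch under stated assumptions: the `summarize` helper name, the `OPENAI_BASE_URL`/`HF_TOKEN` variable names, the Hugging Face router URL, and the `openai/gpt-oss-120b` model id are all illustrative guesses, not confirmed by the source — check your provider's documentation for the actual values:

```python
import os

def summarize(text: str, model: str = "openai/gpt-oss-120b") -> str:
    """Request a short summary from an OpenAI-compatible endpoint."""
    from openai import OpenAI  # any OpenAI-compatible client works here

    client = OpenAI(
        # Assumed Hugging Face router URL; override via OPENAI_BASE_URL for other providers
        base_url=os.environ.get("OPENAI_BASE_URL", "https://router.huggingface.co/v1"),
        api_key=os.environ["HF_TOKEN"],  # assumed env var name for the provider key
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Summarize in two sentences:\n\n{text}"}],
    )
    return response.choices[0].message.content
```

The app itself issues these requests with the async client so summaries can be generated in parallel batches.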
- Enable 2-factor authentication on your Gmail account
- Go to Google App Passwords
- Generate an app password for "Mail"
- Use this password (not your regular Gmail password)
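The app password is then used for SMTP authentication. A minimal sketch of building and sending the digest with the stdlib `email` package and `aiosmtplib` (the helper names and the implicit-TLS port 465 are assumptions; the project's actual sender may differ):

```python
import os
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_digest_email(subject: str, text_body: str, html_body: str) -> MIMEMultipart:
    """Build a multipart/alternative message with plain-text and HTML parts."""
    message = MIMEMultipart("alternative")
    message["Subject"] = subject
    message["From"] = os.environ.get("GMAIL_EMAIL", "")
    message["To"] = os.environ.get("TO_EMAIL", "")
    # Plain text first, HTML last: clients render the last alternative they support
    message.attach(MIMEText(text_body, "plain"))
    message.attach(MIMEText(html_body, "html"))
    return message

async def send_digest(message: MIMEMultipart) -> None:
    import aiosmtplib  # async SMTP client (listed in the dependencies)

    await aiosmtplib.send(
        message,
        hostname="smtp.gmail.com",
        port=465,
        use_tls=True,
        username=os.environ["GMAIL_EMAIL"],
        password=os.environ["GMAIL_APP_PASSWORD"],  # the app password, not the account password
    )
```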
- Fetch Posts: The app connects to Reddit API and concurrently fetches recent posts from multiple subreddits using async operations
- AI Processing: Posts are sent to the OpenAI-compatible model in parallel batches for intelligent summarization using async API calls
- Email Generation: Creates both HTML and plain text versions of the digest
- Delivery: Sends the formatted email via Gmail SMTP using async email delivery
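The shape of that pipeline can be sketched with pure `asyncio`, using stub coroutines in place of the real Reddit, LLM, and email calls (all function names here are illustrative, not the app's actual API):

```python
import asyncio

async def fetch_subreddit(name: str) -> list[str]:
    await asyncio.sleep(0)  # stub standing in for the Reddit API call
    return [f"{name} post {i}" for i in range(2)]

async def summarize_post(post: str) -> str:
    await asyncio.sleep(0)  # stub standing in for the LLM call
    return f"summary of {post}"

async def run_digest(subreddits: list[str]) -> list[str]:
    # Step 1: fetch all subreddits concurrently
    per_subreddit = await asyncio.gather(*(fetch_subreddit(s) for s in subreddits))
    posts = [p for batch in per_subreddit for p in batch]
    # Step 2: summarize all posts concurrently
    summaries = await asyncio.gather(*(summarize_post(p) for p in posts))
    # Steps 3-4 (email generation and delivery) would follow here
    return summaries

summaries = asyncio.run(run_digest(["technology", "programming"]))
print(len(summaries))  # 2 subreddits x 2 posts -> 4 summaries
```

`asyncio.gather()` preserves input order, so summaries stay aligned with their posts when the digest is assembled.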
Modify src/templates/email_template.py to customize the email appearance:
- Change colors, fonts, and layout
- Add your branding or logo
- Modify the CSS styles
The application uses modern async/await patterns for concurrency:

- Concurrent Subreddit Fetching: All subreddits are fetched simultaneously using `asyncio.gather()`
- Parallel AI Processing: OpenAI-compatible model requests are batched and processed concurrently
- Async Email Delivery: Non-blocking email sending using `aiosmtplib`
- Efficient Resource Usage: Proper connection management and cleanup
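A common way to batch concurrent model requests without flooding the endpoint is an `asyncio.Semaphore` around each call. A small runnable sketch (the stub `asyncio.sleep` stands in for the real model request; names are illustrative):

```python
import asyncio

async def summarize_all(posts: list[str], max_concurrent: int = 5) -> list[str]:
    # The semaphore caps how many requests are in flight at once
    semaphore = asyncio.Semaphore(max_concurrent)

    async def bounded(post: str) -> str:
        async with semaphore:
            await asyncio.sleep(0)  # stub standing in for the model call
            return f"summary: {post}"

    # gather() runs all bounded tasks and returns results in input order
    return await asyncio.gather(*(bounded(p) for p in posts))

results = asyncio.run(summarize_all(["a", "b", "c"]))
```

Tuning `max_concurrent` trades throughput against the provider's rate limits.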
Email not sending:
- Verify Gmail app password is correct
- Check 2FA is enabled on Gmail account
- Ensure "Less secure app access" is not blocking the connection
Reddit API errors:
- Verify Reddit credentials are correct
- Check rate limits (Reddit API has usage limits)
- Ensure subreddit names are spelled correctly
Add debug logging to troubleshoot:

```python
# Add this for more verbose output
import logging
logging.basicConfig(level=logging.DEBUG)
```

The application requires the following Python packages:

- `transformers>=4.28.0` - Hugging Face Transformers library for OpenAI models
- `asyncpraw>=7.8.1` - Async Reddit API wrapper
- `aiosmtplib>=3.0.0` - Async SMTP email client
- `python-dotenv>=1.1.0` - Environment variable management
- ✅ `async batches` - Make code async with Anthropic's async class (Completed)
- `templates` - Refactor to move to another module
- `LLMs` - Add support for more LLMs (Gemini, OpenAI)
- `performance` - Add connection pooling and rate limiting optimizations
Made with ❤️ using Reddit API and OpenAI-compatible models