A FastAPI-based backend service that powers an AI programming assistant with chat capabilities, code analysis, error detection, and multi-modal content processing.
- Real-time Chat: Streaming responses with session management (see the sketch after the lists below)
- Multi-Session Support: Create and manage multiple conversation sessions
- Chat History: Persistent conversation storage with MongoDB
- Comment-to-Code Generation: Convert comments into functional code
- Alternative Code Suggestions: Generate multiple implementation approaches
- Error Detection: AI-powered code error analysis and suggestions
- Context-Aware Responses: Leverages conversation history for better suggestions
- Repository Analysis: Git repository cloning and analysis
- PDF Processing: Extract and analyze PDF documents
- Web Content: URL content extraction and processing
- File Upload Support: Handle various file types for analysis
- JWT Authentication: Secure token-based authentication
- User Management: Registration, login, and session handling
- Protected Routes: Role-based access control
- Framework: FastAPI with async/await support
- Database: MongoDB for chat history and user data
- Vector Storage: FAISS for semantic search and embeddings
- AI Integration: Ollama LLM (perlbot3:latest model)
- Authentication: JWT tokens with bcrypt password hashing
- File Processing: PDF loaders, Git integration, web scraping
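The real-time chat feature pairs FastAPI's async support with the Ollama model listed above. The following is a minimal, illustrative sketch of how such a streaming endpoint could look, assuming httpx is available; the route path, request schema, and helper names are assumptions, not the project's actual implementation.

```python
# Illustrative streaming chat endpoint backed by Ollama's /api/generate API.
# Path, payload shape, and names are assumptions for this sketch.
import json

import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

OLLAMA_BASE_URL = "http://localhost:11434"
OLLAMA_MODEL = "perlbot3:latest"


class ChatRequest(BaseModel):
    session_id: str
    message: str


@app.post("/chats/chat")
async def chat(req: ChatRequest):
    async def token_stream():
        # Forward tokens to the client as Ollama produces them (NDJSON stream).
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream(
                "POST",
                f"{OLLAMA_BASE_URL}/api/generate",
                json={"model": OLLAMA_MODEL, "prompt": req.message, "stream": True},
            ) as resp:
                async for line in resp.aiter_lines():
                    if not line:
                        continue
                    chunk = json.loads(line)
                    yield chunk.get("response", "")

    return StreamingResponse(token_stream(), media_type="text/plain")
```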
- Clone the repository
  git clone <repository-url>
  cd git_check/backend
- Install dependencies
  pip install -r requirements.txt
- Environment Setup
  # Copy and configure environment variables
  cp .env.example .env
  # Edit .env with your configuration
- Start the server
  uvicorn main:app --reload --port 8000
Create a .env file with the following variables:
# Database
MONGODB_URL=mongodb://localhost:27017
DATABASE_NAME=archelon_ai
# JWT Configuration
JWT_SECRET_KEY=your-secret-key
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
# AI Model Configuration
OLLAMA_MODEL=perlbot3:latest
OLLAMA_BASE_URL=http://localhost:11434
# Google Cloud (if using)
GOOGLE_APPLICATION_CREDENTIALS=Config/googlecloud.json
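A minimal sketch of how these variables might be loaded in config.py, assuming python-dotenv is installed; the project's actual configuration module may differ.

```python
# Sketch of environment-based configuration; variable names match the .env
# example above, defaults and the use of python-dotenv are assumptions.
import os

from dotenv import load_dotenv

load_dotenv()  # read .env from the working directory

MONGODB_URL = os.getenv("MONGODB_URL", "mongodb://localhost:27017")
DATABASE_NAME = os.getenv("DATABASE_NAME", "archelon_ai")
JWT_SECRET_KEY = os.getenv("JWT_SECRET_KEY")
JWT_ALGORITHM = os.getenv("JWT_ALGORITHM", "HS256")
ACCESS_TOKEN_EXPIRE_MINUTES = int(os.getenv("ACCESS_TOKEN_EXPIRE_MINUTES", "30"))
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "perlbot3:latest")
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
GOOGLE_APPLICATION_CREDENTIALS = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
```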
backend/
├── main.py                        # FastAPI application entry point
├── config.py                      # Configuration management
├── requirements.txt               # Python dependencies
├── routes/                        # API route definitions
│   ├── auth_routes.py             # Authentication endpoints
│   ├── chatHistoryRoutes.py       # Chat session management
│   ├── chatRoutes.py              # Basic chat functionality
│   ├── chatRoutesTharundi.py      # Advanced chat features
│   ├── commentSuggestionRoutes.py # Code suggestion endpoints
│   ├── altCodeRoutes.py           # Alternative code generation
│   ├── errorRoutes.py             # Error detection endpoints
│   └── validateContentRoutes.py   # Content validation
├── services/                      # Business logic layer
│   ├── auth/                      # Authentication services
│   ├── chatHistory/               # Chat and LLM integration
│   └── loaders/                   # Content processing services
├── models/                        # Database models
├── schemas/                       # Pydantic schemas
├── utils/                         # Utility functions
├── database/                      # Database configuration
├── Controllers/                   # Request controllers
└── vectordb/                      # Vector database integration
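A minimal sketch of how main.py might wire these route modules into the application. The module names come from the tree above, but the assumption that each exposes an APIRouter named router, and the prefixes shown, are illustrative only.

```python
# Illustrative application wiring; prefixes and the `router` attribute on each
# routes module are assumptions based on the endpoint list below.
from fastapi import FastAPI

from routes import auth_routes, chatHistoryRoutes, chatRoutes

app = FastAPI(title="Archelon AI Backend")

app.include_router(auth_routes.router, prefix="/auth", tags=["auth"])
app.include_router(chatHistoryRoutes.router, prefix="/chatHistory", tags=["chat-history"])
app.include_router(chatRoutes.router, prefix="/chats", tags=["chat"])
```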
- POST /auth/register - User registration
- POST /auth/login - User login
- GET /auth/me - Get current user info
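A hypothetical client-side walkthrough of the auth endpoints above, using the requests library. The field names (username, password) and the access_token response key are assumptions; check the Swagger UI for the exact schemas.

```python
# Illustrative auth flow; payload and response keys are assumptions.
import requests

BASE_URL = "http://localhost:8000"

# Register a new user.
requests.post(f"{BASE_URL}/auth/register", json={"username": "alice", "password": "s3cret"})

# Log in and capture the JWT access token.
login = requests.post(f"{BASE_URL}/auth/login", json={"username": "alice", "password": "s3cret"})
token = login.json()["access_token"]

# Call a protected route with the bearer token.
me = requests.get(f"{BASE_URL}/auth/me", headers={"Authorization": f"Bearer {token}"})
print(me.json())
```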
- GET /chatHistory/sessions - List user sessions
- POST /chatHistory/sessions - Create new session
- DELETE /chatHistory/sessions/{id} - Delete session
- GET /chatHistory/{session_id}/messages - Get session messages
- POST /chats/chat - Send chat message
- POST /commentCode/suggest - Generate code from comments
- POST /validate/error - Detect code errors
- POST /alt-code - Generate alternative implementations
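Since /chats/chat streams its response, a client can consume it incrementally. The snippet below is a hedged example; the JSON keys (session_id, message) and the bearer-token placeholder are assumptions.

```python
# Illustrative streaming consumption of the chat endpoint; payload keys are
# assumptions, and <access-token> must be replaced with a real JWT.
import requests

BASE_URL = "http://localhost:8000"
headers = {"Authorization": "Bearer <access-token>"}

with requests.post(
    f"{BASE_URL}/chats/chat",
    json={"session_id": "abc123", "message": "Explain this traceback"},
    headers=headers,
    stream=True,
) as resp:
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
```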
- POST /upload - File upload and analysis
- POST /process-url - Web content analysis
- POST /analyze-repo - Git repository analysis
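A hedged example of calling the content-processing endpoints; the multipart field name ("file") and the JSON keys ("url", "repo_url") are assumptions, not documented schemas.

```python
# Illustrative content-processing calls; field names are assumptions.
import requests

BASE_URL = "http://localhost:8000"
headers = {"Authorization": "Bearer <access-token>"}

# Upload a PDF for analysis.
with open("paper.pdf", "rb") as fh:
    requests.post(f"{BASE_URL}/upload", files={"file": fh}, headers=headers)

# Ask the backend to fetch and analyze a web page.
requests.post(f"{BASE_URL}/process-url", json={"url": "https://example.com"}, headers=headers)

# Analyze a Git repository by URL.
requests.post(f"{BASE_URL}/analyze-repo", json={"repo_url": "https://github.com/user/repo"}, headers=headers)
```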
# Development (auto-reload)
uvicorn main:app --reload --host 0.0.0.0 --port 8000
# Production (multiple workers)
uvicorn main:app --host 0.0.0.0 --port 8000 --workers 4
# Build image
docker build -t archelon-backend .
# Run container
docker run -p 8000:8000 archelon-backend
- Python: 3.9 or higher
- MongoDB: 4.4 or higher
- Ollama: With perlbot3:latest model installed
- Memory: 8GB+ recommended for AI model operations
- Storage: 10GB+ for model storage and vector databases
Once the server is running, visit:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
# Run tests
python -m pytest tests/
# Run specific test
python test.py
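For new tests, a minimal pytest sketch using FastAPI's TestClient is shown below; it assumes main.py exposes an app instance and that the auto-generated docs are not disabled.

```python
# Minimal pytest sketch; assumes `app` is importable from main.py.
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def test_docs_available():
    # The auto-generated Swagger UI should be reachable on a healthy app.
    response = client.get("/docs")
    assert response.status_code == 200
```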
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is part of the Programming Assistant AI Bot system. Please refer to the main project license for usage terms.
- Frontend Interface: ../frontend - React-based web interface
- VS Code Extension: ../VsCodeExtension - IDE integration
- Documentation: Full project documentation
Built with ❤️ using FastAPI and modern AI technologies.