A modern chat application with offline message support and intelligent assistant capabilities, including web search functionality.
## Features

- Real-time chat with AI assistants
- Web search integration via Tavily API
- Offline message queuing and synchronization
- Responsive UI design
- Message persistence
- System notifications for connectivity status
## Project Structure

- `/frontend` - React frontend application
  - `/src` - Source code
    - `/api` - API client and service interfaces
    - `/components` - Reusable UI components
    - `/pages` - Page components
    - `/services` - Business logic services
    - `/utils` - Utility functions
- `/RAI_Chat/Backend` - Flask backend server
  - `/api` - API endpoints
  - `/components` - Business logic components
  - `/services` - Service layer
  - `/modules` - Feature modules, including web search
- `/llm_Engine` - LLM API server for handling AI model interactions
## Prerequisites

- Node.js (v14 or higher)
- Python 3.9 or higher
- npm or yarn
- Tavily API key (for web search functionality)
## Environment Setup

Create a `.env` file in the `/RAI_Chat/Backend` directory with the following variables:

```
TAVILY_API_KEY=your_tavily_api_key_here
```
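The backend reads this key from its environment at runtime. A minimal sketch of how it might be loaded, assuming the project uses python-dotenv (whether it does is an assumption; it may read `os.environ` directly):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Pull variables from the .env file into the process environment.
load_dotenv()

tavily_api_key = os.environ.get("TAVILY_API_KEY")
if not tavily_api_key:
    raise RuntimeError("TAVILY_API_KEY is not set; web search will be unavailable.")
```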
## Installation

- Clone the repository:

```
git clone <repository-url>
cd RAI_Chat
```
- Install dependencies:

```
# Install frontend dependencies
cd frontend
npm install

# Install backend dependencies
cd ../Backend
pip install -r requirements.txt

# Install LLM Engine dependencies
cd ../../llm_Engine
pip install -r requirements.txt
```
## Running with Docker (Recommended)

The easiest and most reliable way to run the application is with Docker:
```
# Make the Docker scripts executable
chmod +x setup_docker_env.sh docker-start.sh

# Start the application with Docker
./docker-start.sh
```

This script will:
- Set up the necessary environment files (if they don't exist)
- Build the Docker images for all components
- Start all services with Docker Compose
- Configure networking between components automatically
Once started, you can access the application at http://localhost:8081 in your browser.
To view logs:

```
docker compose logs -f
```

To stop all containers:

```
docker compose down
```

## Running with the Start Script

If you prefer not to use Docker, you can use the provided start script:
```
# Make the script executable (if needed)
chmod +x start_app.sh

# Run the start script
./start_app.sh
```

This script will:
- Start the LLM Engine on port 6101
- Start the Backend Server on port 6102
- Start the Frontend on port 8081
- Monitor all components and provide logging
Once started, you can access the application at http://localhost:8081 in your browser.
To stop all components, press Ctrl+C in the terminal where the script is running.
## Starting Components Individually

If you prefer to start the components individually:
- Start the LLM Engine:

```
cd llm_Engine
python llm_api_server.py --port 6101
```

- Start the Backend Server:

```
cd RAI_Chat/Backend
python wsgi.py
```

- Start the Frontend:

```
cd RAI_Chat/frontend
npm start
```
## Architecture

### Frontend

The frontend is built with React and uses:
- TypeScript for type safety
- Styled Components for styling
- Axios for API requests
- Local storage for message persistence
### Backend

The backend is built with Flask and uses:
- SQLAlchemy for database interactions
- Flask-CORS for handling cross-origin requests
- JWT for authentication
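As an illustration of how these pieces fit together, here is a minimal sketch of a Flask app wiring up CORS and SQLAlchemy (the model and route are hypothetical, not the project's actual code, and JWT handling is omitted for brevity):

```python
from flask import Flask, jsonify, request
from flask_cors import CORS
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
CORS(app)  # let the frontend on another port call this API
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///chat.db"
db = SQLAlchemy(app)

class Message(db.Model):  # hypothetical model for persisted messages
    id = db.Column(db.Integer, primary_key=True)
    role = db.Column(db.String(16), nullable=False)
    content = db.Column(db.Text, nullable=False)

with app.app_context():
    db.create_all()  # create the sketch's table

@app.route("/api/messages", methods=["POST"])  # hypothetical endpoint
def create_message():
    data = request.get_json()
    msg = Message(role=data["role"], content=data["content"])
    db.session.add(msg)
    db.session.commit()
    return jsonify({"id": msg.id}), 201
```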
### LLM Engine

The LLM Engine exposes an HTTP API for interacting with various language models; the scripts above run it on port 6101.
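The backend talks to it over HTTP. A sketch of what a call might look like (the route and payload shape are assumptions, not verified; check `llm_Engine/llm_api_server.py` for the real interface):

```python
import requests

# Hypothetical request to the LLM Engine started on port 6101.
resp = requests.post(
    "http://localhost:6101/api/generate",  # assumed route, not verified
    json={"prompt": "Summarize today's chat.", "max_tokens": 256},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```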
## Web Search Integration

The application integrates with the Tavily API to provide web search capabilities. When a user asks a question that requires up-to-date information, the system can:
- Detect when web search is needed
- Perform a search using the Tavily API
- Process and incorporate the search results into the AI's response
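A minimal sketch of the search step using the official tavily-python client (illustrative only; the project's actual web search module lives under `/RAI_Chat/Backend/modules` and may differ):

```python
import os

from tavily import TavilyClient  # pip install tavily-python

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

# Run a search and collect snippets to feed into the model's context.
response = client.search("latest developments in battery technology")
snippets = [result["content"] for result in response["results"]]
```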
To use this feature:

- Ensure you have a valid Tavily API key in your `.env` file
- Ask questions that might benefit from web search, such as current events or factual information
## Troubleshooting

If you encounter issues starting the application:

- Check the log files in the `/logs` directory
- Ensure all required ports (6101, 6102, 8081) are available; the snippet below can help
- Verify that all environment variables are set correctly

If web search isn't working:

- Check that your Tavily API key is valid and properly set in the `.env` file
- Look for any error messages in the backend logs related to the Tavily client
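A quick way to check whether the required ports are free (a convenience sketch, not part of the repository):

```python
import socket

# Ports used by the LLM Engine, Backend Server, and Frontend respectively.
for port in (6101, 6102, 8081):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        in_use = sock.connect_ex(("localhost", port)) == 0
    print(f"port {port}: {'in use' if in_use else 'free'}")
```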
## License

This project is proprietary and confidential.