Based on the state-of-the-art LangChain framework.

✅ - AI Chat App running on local network
✅ - File upload
✅ - Web Search
❌ - Indexing local files
❌ - RAG Orchestrator
- Install Ollama
- Pull some models to mess with:

```
ollama pull deepseek-r1:14b # reasoning model
ollama pull mistral:latest  # general purpose
```

- Install Docker
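Once the models are pulled, it's worth sanity-checking the Ollama server before wiring up the UI. A quick sketch, assuming Ollama is serving on its default port 11434:

```shell
# List the models Ollama has pulled (the API listens on 11434 by default)
curl -s http://localhost:11434/api/tags

# One-off prompt to confirm a model actually responds
ollama run mistral:latest "Reply with one word: ready?"
```

If `curl` gets connection refused, start the server with `ollama serve` first.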
Here's a sample of the final result:

```
# Pull the latest image
docker pull ghcr.io/open-webui/open-webui:main

# Volume to persist data
docker volume create open-webui-data

# Run the container
docker run -d -p 6969:8080 \
  -v open-webui-data:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Check container status
docker ps

# Get your IP (macOS)
ipconfig getifaddr en0
```

Visit http://<your_ip>:6969 (or http://localhost:6969) in your browser.
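If the page doesn't load, the container's logs usually say why. A couple of standard Docker checks (nothing Open WebUI-specific here):

```shell
# Tail the container logs
docker logs -f open-webui

# Confirm something answers on the mapped port
curl -sI http://localhost:6969 | head -n 1
```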
Note:
On first run, you will be prompted to create an admin account. Other user accounts can be created later on.
Go to Settings > General > System Prompt and set it to your liking.
```
You are the most efficient AI Assistant that answers following these principles:
- Casual, straight-to-the-point responses.
- Prioritise IT best practices and performance.
- Provide trade-offs when applicable.
- Fact-check and provide sources when needed.
```

Go to Settings > Advanced and set the following parameters, so it knows how many Rs are in "strawberry":
```
Mirostat: 0                      # disable Mirostat adaptive sampling
Top K: 10                        # sample from the 10 most likely tokens only
Frequency Penalty: 1             # penalise repeated tokens
Max tokens (num_predict): 4096   # to be safe
```
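These knobs are set in Open WebUI, but they map one-to-one onto Ollama's generation options, so you can reproduce them against Ollama's API directly. A sketch, assuming the default port and the mistral model pulled earlier:

```shell
# Same sampling settings, passed as Ollama generation options
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral:latest",
  "prompt": "How many R are in strawberry?",
  "stream": false,
  "options": {
    "mirostat": 0,
    "top_k": 10,
    "frequency_penalty": 1,
    "num_predict": 4096
  }
}'
```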
Note: We are using the Google Search Engine here, but other search engines are supported. Full documentation is available here.
- Create a Google Custom Search Engine
- Grab the Search Engine ID created
- Get the API Key from the Custom Search JSON API
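You can verify the key and engine ID before handing them to Open WebUI; the Custom Search JSON API takes both as query parameters (`YOUR_API_KEY` and `YOUR_ENGINE_ID` below are placeholders for the values you just created):

```shell
# A valid pair returns JSON search results; a bad key returns a 400 error
curl -s "https://www.googleapis.com/customsearch/v1?key=YOUR_API_KEY&cx=YOUR_ENGINE_ID&q=test"
```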
Go to Settings > Admin settings > Web Search
Set the Web Search Engine to google_pse and fill in the API Key and Search Engine ID.
Search Result Count: 5
Concurrent Requests: 6