AI any text or file clusterer & sorting
An AI book recommendation system built with Streamlit and Ollama. It uses 'nomic-embed-text' for semantic search and 'llama3.2:1b' for generating in-depth analyses of books and user queries.
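A minimal sketch of the pattern this project describes, assuming the Ollama Python client (`pip install ollama`): embed texts with nomic-embed-text, rank by cosine similarity, then ask llama3.2:1b for an analysis. The sample book list and helper names are illustrative, not taken from the repository.

```python
# Sketch: semantic search with nomic-embed-text plus generation with
# llama3.2:1b via the Ollama Python client. Book data is illustrative.
import ollama
import numpy as np

def embed(text: str) -> np.ndarray:
    # Dense vector for the given text from the local nomic-embed-text model.
    result = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(result["embedding"])

books = {
    "Dune": "A desert planet, political intrigue, and a messianic hero.",
    "Neuromancer": "A washed-up hacker hired for one last job in cyberspace.",
}
book_vectors = {title: embed(desc) for title, desc in books.items()}

def recommend(query: str) -> str:
    qv = embed(query)
    # Pick the book whose embedding has the highest cosine similarity.
    best = max(
        books,
        key=lambda t: float(np.dot(qv, book_vectors[t]))
        / (np.linalg.norm(qv) * np.linalg.norm(book_vectors[t])),
    )
    # Ask the small Llama model for an in-depth analysis of the match.
    reply = ollama.chat(
        model="llama3.2:1b",
        messages=[{"role": "user",
                   "content": f"Explain why '{best}' fits this request: {query}"}],
    )
    return reply["message"]["content"]

print(recommend("I want a sci-fi novel about hackers"))
```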
A RAG application for East West University (Science Faculty only)
Simple agents work well for 1-to-1 retrieval systems. For more complex tasks we need a multi-step reasoning loop: the agent breaks a complex task into subtasks and solves them step by step while maintaining conversational memory, as in the sketch below.
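A minimal sketch of such a reasoning loop, assuming a local model served by Ollama; the model choice, helper names, and prompts are illustrative assumptions, not the project's actual implementation.

```python
# Sketch of a multi-step reasoning loop: plan subtasks, solve each in turn,
# then combine the results, keeping the whole conversation as memory.
import ollama

def ask(memory: list, prompt: str) -> str:
    memory.append({"role": "user", "content": prompt})
    reply = ollama.chat(model="llama3.2:1b", messages=memory)
    answer = reply["message"]["content"]
    memory.append({"role": "assistant", "content": answer})
    return answer

def reasoning_loop(task: str) -> str:
    memory = [{"role": "system", "content": "You are a step-by-step problem solver."}]
    # Step 1: break the complex task into subtasks.
    plan = ask(memory, f"Break this task into short numbered subtasks: {task}")
    subtasks = [line for line in plan.splitlines() if line.strip()]
    # Step 2: solve each subtask, carrying the conversational memory forward.
    for subtask in subtasks:
        ask(memory, f"Solve this subtask using what we know so far: {subtask}")
    # Step 3: combine the intermediate results into a final answer.
    return ask(memory, "Combine the results above into a final answer.")

print(reasoning_loop("Compare two retrieval strategies and recommend one"))
```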
SOC Analyst Automation using a RAG model integrates a knowledge-retrieval system with generative AI to automate SOC Level-1 tasks. It processes server logs, retrieves relevant security insights, and generates accurate responses, enhancing incident analysis, reducing response times, and improving efficiency in handling cybersecurity threats.
A Retrieval-Augmented Generation (RAG) system built on a fine-tuned Llama model. It extracts and embeds Cirebon cuisine knowledge into a PostgreSQL database with pgvector, enabling efficient retrieval and contextual responses via FastAPI and Docker.
🤖 Build a smart AI assistant that learns from any website using a Retrieval-Augmented Generation framework with local models powered by Ollama.
GPU constrained? No more. Microsoft released Phi-3, designed specifically for memory- and compute-constrained environments. The model supports the ONNX CPU runtime, which offers impressive inference speed even on mobile CPUs.
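A rough sketch of CPU-only inference with a Phi-3 ONNX export, assuming the onnxruntime-genai package (`pip install onnxruntime-genai`); the model directory is a placeholder and the exact generator API below has changed between package versions, so treat it as an outline rather than the project's code.

```python
# Sketch: CPU-only generation with a local Phi-3 ONNX export via
# onnxruntime-genai. Model path is a placeholder; API may vary by version.
import onnxruntime_genai as og

model = og.Model("./phi3-mini-4k-instruct-onnx-cpu")   # local export directory
tokenizer = og.Tokenizer(model)

prompt = "<|user|>\nExplain RAG in one paragraph.<|end|>\n<|assistant|>\n"
params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```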
LlamaTalks is a Spring Boot-based chatbot application leveraging LangChain4j and Ollama for advanced conversational AI with Retrieval-Augmented Generation (RAG) capabilities. It supports streaming responses, conversation management, document ingestion, and persistent chat history.
buddyRAG - a small AI RAG chatbot for chatting with your Markdown notes. Yep, you can use it with Obsidian.
This repository was created to run a generative AI with RAG, 100% locally.
A Retrieval-Augmented Generation (RAG) project built with FastAPI, MongoDB, Qdrant, and JWT authentication—featuring secure document uploads, chunking, embeddings, and context-aware AI responses. Designed to be scalable, reliable, and production-ready.
This project demonstrates how to integrate text embeddings using nomic-embed-text and granite-embedding models with PostgreSQL and pgvector. You can perform similarity searches, text analysis, and more.
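A minimal sketch of that setup: store nomic-embed-text vectors in PostgreSQL with pgvector and run a cosine-distance similarity search. The connection string, table name, and sample texts are placeholders; nomic-embed-text produces 768-dimensional vectors.

```python
# Sketch: embeddings from nomic-embed-text stored in Postgres/pgvector,
# then a cosine-distance similarity search. DSN and table are placeholders.
import ollama
import psycopg2

def embed(text: str) -> list[float]:
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

conn = psycopg2.connect("postgresql://postgres:postgres@localhost:5432/demo")
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id SERIAL PRIMARY KEY,
        content TEXT,
        embedding vector(768)
    );
""")

for doc in ["pgvector adds vector columns to Postgres.",
            "nomic-embed-text is an open embedding model."]:
    cur.execute(
        "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
        (doc, str(embed(doc))),   # pgvector accepts a '[0.1, 0.2, ...]' literal
    )
conn.commit()

# <=> is pgvector's cosine-distance operator; smaller means more similar.
cur.execute(
    "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT 3",
    (str(embed("How do I store embeddings in Postgres?")),),
)
print([row[0] for row in cur.fetchall()])
```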
Chat with the book "Developer Relations": a simple local RAG AI chatbot using Ollama
MCP server that connects Claude to local Ollama models, delegating simple tasks to save tokens for complex reasoning
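A sketch of that delegation pattern, assuming the official `mcp` Python SDK's FastMCP helper and a local Ollama model; the server name, tool name, and model are illustrative, not the repository's actual choices.

```python
# Sketch: an MCP server exposing one tool that forwards simple prompts to a
# small local Ollama model, so the client spends tokens only on hard reasoning.
import ollama
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ollama-delegate")

@mcp.tool()
def delegate_simple_task(prompt: str) -> str:
    """Answer a simple, self-contained prompt with a small local model."""
    reply = ollama.chat(
        model="llama3.2:1b",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["message"]["content"]

if __name__ == "__main__":
    mcp.run()   # defaults to stdio transport, which desktop MCP clients expect
```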
YuccAI is a voice assistant website for Universitas Ciputra, offering real-time answers to university-related questions via voice commands. With conversation history and topic recommendations, it simplifies access to campus information through innovative AI technology.
"Ask your PDF" ChatBot : Streamlit App, LangChain, llama3, Nomic embeddings