# LLM RAG

created by binarybana

Easy RAG scripts for a local, embedded, MCP-enabled knowledge store.
A RAG (Retrieval-Augmented Generation) implementation using LlamaIndex for document processing, Gemini for embeddings, and LanceDB for vector storage.
## Setup
This project uses `uv` for dependency management and `direnv` for environment management. To get started:
- Install dependencies:

  ```bash
  # Create and activate a new virtual environment
  uv venv
  source .venv/bin/activate

  # Install dependencies
  uv pip install -e .
  ```
- Set up environment:

  ```bash
  # Create .env file with your Google API key
  echo "GOOGLE_API_KEY=your_key_here" > .env

  # Allow direnv to load the environment
  direnv allow
  ```
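direnv reads its per-project configuration from an `.envrc` file in the repository root. If the repository doesn't already ship one, a minimal `.envrc` that exports the `.env` created above (a sketch; adapt to the repo's actual setup) would be:

```shell
# .envrc -- loaded by direnv when you cd into the project
# dotenv is a direnv stdlib helper that exports keys from .env
dotenv

# Optionally activate the uv-created virtualenv as well
source .venv/bin/activate
```

After editing `.envrc`, run `direnv allow` again so direnv picks up the change.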
## Usage

### Data Ingestion

```bash
python -m llm_rag.ingest --source /path/to/source --type [code|url|pdf]
```
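The ingest entry point takes a source path and a source type. A minimal sketch of that argument interface (hypothetical — the real flags are defined in `llm_rag/ingest.py`) could look like:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of the ingest CLI shown above;
    # the actual implementation lives in llm_rag/ingest.py.
    parser = argparse.ArgumentParser(prog="llm_rag.ingest")
    parser.add_argument("--source", required=True,
                        help="Path or URL of the material to ingest")
    parser.add_argument("--type", required=True,
                        choices=["code", "url", "pdf"],
                        help="How the source should be parsed")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Ingesting {args.source} as {args.type}")
```

Restricting `--type` with `choices` makes argparse reject anything outside `code`, `url`, or `pdf` with a usage error.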
### Search Server

```bash
python -m llm_rag.search --db /path/to/lancedb
```
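Conceptually, the search server answers a query by embedding it and returning the nearest stored vectors. A dependency-free sketch of that retrieval step (the real server delegates this to LanceDB's index rather than a linear scan):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def top_k(query: list[float], store: dict[str, list[float]], k: int = 3) -> list[str]:
    # Linear scan over all stored vectors; LanceDB replaces this
    # with an approximate-nearest-neighbour index for large stores.
    ranked = sorted(store, key=lambda doc: cosine_similarity(query, store[doc]),
                    reverse=True)
    return ranked[:k]
```

In the actual pipeline the query vector comes from the same Gemini embedding model used at ingestion time, so query and document vectors live in the same space.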
Recommended Clients
MCP CLI ClientEen lokale MCP host en client die met meerdere LLM's en meerdere MCP servers kan werken.
Flask Webapplicatie met LLM-integratie en MCP-toolsFlask webapplicatie met LLM-integratie en MCP-tools voor het verwerken van prompts via verschillende AI-modellen en contextuele tools.
Mcp_agent_streamlit_rag
Python MCP Client
research
健康管理系统
MCP_LLM使用大模型结合mcp协议
MCP ClientA very simple MCP demo, based off of Anthropics MCP examples, with the added bonus of an agency loop
Cursor Apple Notes IndexerAn MCP app for Cursor that searches and indexes Apple Notes locally
Mattermost MCP Client