Website • Telegram • X / Twitter • Farcaster
I help Web3 protocols move from messy on-chain data to autonomous intelligence. 10+ years of engineering experience repurposed for the speed of the blockchain.
- 🏗 Data Architecture: High-throughput indexing (The Graph/Substreams) & Real-time Lakehouses.
- 🤖 Agentic AI: Autonomous on-chain agents using LangGraph, CrewAI, and RAG pipelines.
- ⚡ Velocity: I ship production-grade micro-services in 48-hour sprints.
| Blockchain Ecosystem | AI & Data Engineering |
| --- | --- |
| Solidity, Rust, Subgraphs | LangChain, LangGraph, RAG |
| Dune Analytics, Flipside | Python, Spark, Kafka |
| RPC Optimization, Mempool Monitoring | Pinecone, Weaviate, Qdrant |
| Base, Ethereum, Solana | Databricks, Snowflake |
| | AWS, GCP, Azure |
- Tier A: Custom Indexing & Subgraphs ($1,200)
- Tier B: Agentic RAG Bots & On-chain Alphas ($1,500)
- Tier C: Real-time Analytics Dashboards ($900)
- Indexing: Low-latency event stream processing for Base L2 protocols.
- Intelligence: Multi-agent systems for autonomous DAO governance monitoring.
📫 Contact: niranjanagaram@gmail.com or DM via Telegram/Farcaster for instant response.
Additionally:
- 🔭 I’m currently working on AI productization: domain copilots, data-aware agents, and retrieval stacks wired to real KPIs.
- 🌱 I’m currently learning agentic orchestration (graphs), safe tool-use, eval-driven development, and structured reasoning.
- 👯 I’m looking to collaborate on Agents || RAG Platforms || Applied AI.
- 📝 I regularly write on practical AI systems engineering and data-to-LLM workflows for blogs and social.
- 💬 Ask me about Agents, RAG, Evals/Observability, and shipping AI to production.
- 📫 How to reach me: niranjanagaram@gmail.com
- ⚡ Hobbies: playing guitar, singing, fitness, and standup comedy.
- LangChain, LangGraph, LangStack, CrewAI for agentic workflows and graph-based orchestration.
- RAG: Pinecone, Weaviate, Qdrant, FAISS with hybrid search and rerankers.
- Evals/Observability: Ragas, DeepEval, tracing, prompt regression, golden sets.
- Serving: vLLM/Ollama, Ray Serve, BentoML with caching/batching for latency/cost.
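As a minimal illustration of the hybrid-search idea above: blend a keyword score with a vector-similarity score, then rank by the blended score. Everything here is a self-contained toy — the corpus, the bag-of-words "embedding", and the `alpha` weighting are stand-ins for illustration, not Pinecone/Weaviate/Qdrant APIs; real deployments would use BM25 and a learned embedding model plus a cross-encoder reranker.

```python
from collections import Counter
import math

# Toy corpus; in practice these come from a vector store (Pinecone, Qdrant, ...).
DOCS = [
    "Subgraph indexing latency on Base L2",
    "RAG pipelines with rerankers and hybrid search",
    "Mempool monitoring for Ethereum validators",
]

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms present in the document (a BM25 stand-in)."""
    q_terms, d_terms = query.lower().split(), doc.lower().split()
    return sum(t in d_terms for t in q_terms) / len(q_terms)

def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words vector (real systems use a model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, alpha: float = 0.5) -> list[tuple[float, str]]:
    """Blend vector and keyword scores; alpha weights the vector side."""
    qv = embed(query)
    scored = [
        (alpha * cosine(qv, embed(d)) + (1 - alpha) * keyword_score(query, d), d)
        for d in DOCS
    ]
    return sorted(scored, reverse=True)  # final ranking by blended score

results = hybrid_search("hybrid search rerankers")
print(results[0][1])  # the RAG document ranks first for this query
```

The blending step is the design choice that matters: pure vector search can miss exact identifiers (contract addresses, event names), while pure keyword search misses paraphrases; `alpha` trades the two off.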


