# Chain Data Indexer (CDI)

Built by Citizen Web3 for ValidatorInfo.
- Cosmos Hub (main) – Production ✅
- Aztec Protocol (aztec) – Development 🚧
## Table of Contents

- Overview
- Supported Networks
- Features
- Architecture
- Requirements
- Installation
- Quick Start
- Configuration
- Usage
- Makefile Shortcuts
- Troubleshooting
- Development Notes
- Contributing
- License
## Overview

Chain Data Indexer (CDI) is a high-performance, modular blockchain data indexer designed to power block explorers, analytics platforms, DeFi dashboards, compliance tools, and research projects.
It extracts, processes, and stores blockchain data from various networks into a PostgreSQL database, enabling fast and flexible querying.
- 🔧 Primary Use Case: Powering block explorers with rich, searchable blockchain data.
- 🔌 Extensible: Suitable for analytics, compliance, DeFi, R&D, and more.
- 🌐 Multi-Network: This is a monorepo with indexers for multiple blockchain networks.
## Supported Networks

CDI supports multiple blockchain networks. Each network lives on its own dedicated branch with a specialized implementation:
| Network | Branch | Status | Description |
|---|---|---|---|
| Cosmos Hub | main | ✅ Production | Full indexer for cosmoshub-4 with Protobuf decoding, transaction parsing, and PostgreSQL storage |
| Aztec Protocol | aztec | 🚧 Development | High-performance L2 indexer with REST API, Kafka streaming, and parallel block processing (270-280 blocks/sec) |
To work with a specific network indexer, switch to the corresponding branch:

```sh
# For Cosmos Hub indexer (this branch)
git checkout main

# For Aztec Protocol indexer
git checkout aztec
```

> 💡 Note: Each branch contains network-specific configuration, schemas, and documentation. Make sure to read the branch-specific README for detailed setup instructions.
## Features

> Note: The features below are specific to the Cosmos Hub indexer. For other networks, please refer to the respective branch documentation.
- 🚀 High Performance: Efficiently processes large volumes of blocks and transactions.
- 🔄 Resumable Indexing: Smart resumption from the last indexed block to prevent data loss.
- 🐳 Dockerized: Simple deployment with Docker Compose.
- 🗄️ PostgreSQL Integration: Robust, scalable storage with partitioning and indexing.
- 🔍 Advanced Decoding: Supports rich message/transaction type extraction.
- ⚡ Real-time Capable: Block-by-block processing with adjustable concurrency.
- 🌿 Modular Branches: Each supported network can be developed and maintained independently.
## Architecture

- RPC Client: Interfaces with blockchain RPC endpoints.
- Message Decoder: Dynamically generates message type definitions for supported chains.
- Database Layer: Optimized PostgreSQL schema with automatic partitioning.
- Configuration System: Environment-based, validated configuration.
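The Configuration System can be sketched as a small loader that reads and validates environment variables. The variable names below match the Configuration table in this README, but the interface and validation rules are illustrative, not CDI's actual code:

```typescript
// Minimal sketch of an environment-based, validated configuration loader.
// Field names and validation rules are assumptions for illustration.
export interface IndexerConfig {
  pgHost: string;
  pgPort: number;
  pgUser: string;
  pgDatabase: string;
  rpcUrl: string;
  sink: "postgres";
  resume: boolean;
}

type Env = Record<string, string | undefined>;

// Throw early if a required variable is missing, so misconfiguration
// fails at startup rather than mid-indexing.
function required(env: Env, name: string): string {
  const value = env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

export function loadConfig(env: Env): IndexerConfig {
  const port = Number(env.PG_PORT ?? "5432");
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`Invalid PG_PORT: ${env.PG_PORT}`);
  }
  return {
    pgHost: required(env, "PG_HOST"),
    pgPort: port,
    pgUser: required(env, "PG_USER"),
    pgDatabase: required(env, "PG_DATABASE"),
    rpcUrl: required(env, "RPC_URL"),
    sink: "postgres",
    resume: (env.RESUME ?? "true").toLowerCase() === "true",
  };
}
```

At startup such a loader would be called as `loadConfig(process.env)`, failing fast on misconfiguration.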
## Requirements

- Node.js v22+ (v22.18.0 LTS recommended for the best experience)
- yarn
- Docker & docker-compose
## Installation

```sh
git clone https://github.com/citizenweb3/indexer.git
cd indexer
yarn install --frozen-lockfile
```
## Quick Start

1. Copy and configure your environment:

   ```sh
   cp .env.example .env  # Edit .env as needed
   ```

2. Build and start all services:

   ```sh
   docker compose --env-file .env up --build -d
   ```

3. View indexer logs:

   ```sh
   docker compose logs -f indexer
   ```

By default, the indexer will resume from the last processed block (`RESUME=true`) and use Postgres as the sink.
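The resume behavior can be sketched as a pure function. The function and parameter names here are hypothetical; in practice `lastIndexedHeight` would come from a query such as `SELECT MAX(height)` against the blocks table:

```typescript
// Sketch of RESUME=true semantics: continue from the block after the
// highest one already stored, or start from genesis on a fresh database.
// This mirrors the behavior described above; it is not CDI's actual code.
function nextBlockToIndex(
  resume: boolean,
  lastIndexedHeight: number | null, // null means an empty database
  genesisHeight = 1,
): number {
  if (resume && lastIndexedHeight !== null) {
    return lastIndexedHeight + 1;
  }
  return genesisHeight;
}
```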
To reset the database:

```sh
docker compose down -v
docker compose --env-file .env up -d db
```

## Configuration

All configuration is managed through environment variables. See `.env.example` for a complete list.
| Variable | Description | Example |
|---|---|---|
| PG_HOST | PostgreSQL host | localhost |
| PG_PORT | PostgreSQL port | 5432 |
| PG_USER | PostgreSQL user | blockchain |
| PG_PASSWORD | PostgreSQL password | password |
| PG_DATABASE | PostgreSQL database name | indexerdb |
| RPC_URL | Blockchain RPC endpoint | https://rpc.cosmoshub-4-archive.citizenweb3.com |
| SINK | Data sink type | postgres |
| RESUME | Resume from last indexed block | true |
| NODE_OPTIONS | Node.js runtime options | --max-old-space-size=24576 |
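Put together, a minimal `.env` might look like the sketch below (values are illustrative; copy `.env.example` rather than writing one from scratch):

```shell
PG_HOST=localhost
PG_PORT=5432
PG_USER=blockchain
PG_PASSWORD=password
PG_DATABASE=indexerdb
RPC_URL=https://rpc.cosmoshub-4-archive.citizenweb3.com
SINK=postgres
RESUME=true
NODE_OPTIONS=--max-old-space-size=24576
```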
## Usage

1. Install dependencies:

   ```sh
   yarn install --frozen-lockfile
   ```

2. Create a `.env` file:

   ```sh
   cp .env.example .env  # Edit as necessary
   ```

3. Generate runtime artifacts:

   ```sh
   npx tsx scripts/gen-known-msgs.ts
   ```

4. Run Postgres (via Docker):

   ```sh
   make up
   ```

5. Start the indexer:

   ```sh
   npm run start
   ```

Need more memory?

```sh
export NODE_OPTIONS=--max-old-space-size=24576
```
## Makefile Shortcuts

- `make up` – Start db via docker-compose
- `make down` – Stop services
- `make reset` – Remove volumes and re-init DB
- `make logs` – Show DB logs (`docker compose --env-file .env logs -f db`)
- `make psql` – Exec `psql` inside the Postgres container
- `make psql-file FILE=path/to/script.sql` – Copy and run a SQL file inside the DB container
## Troubleshooting

- Indexer fails due to memory? Increase `NODE_OPTIONS`.
- Check your `.env` for correct DB and RPC settings.
- Use `make reset` to reinitialize your database if needed.
## Development Notes

- Runs TypeScript directly via `tsx` during development.
- No tests by default; please add smoke tests for core logic changes.
- See the Makefile and Docker Compose files for advanced operations.
## Contributing

Contributions are welcome!
Open issues/PRs for improvements, bug fixes, or new features.
For significant changes, please open an issue to discuss your ideas first.
## License

See the BE GOOD License for details.