A Celestia Data Availability (DA) proxy, enabling use of the canonical JSON RPC while intercepting and verifiably encrypting sensitive data before submission to the public DA network, and decrypting it on retrieval. Non-sensitive calls are proxied unmodified.
Verifiable encryption is presently enabled via an SP1 Zero-Knowledge Proof (ZKP), with additional proof systems planned.
Jump to a section:
- Send requests to this service: Interact
- Spin up an instance of the service: Operate
- Build & troubleshoot: Develop
All HTTP requests to the proxy are transparently forwarded to an upstream Celestia node; interception logic handles these JSON RPC methods:
- `blob.Submit` encrypts blob data before the proxy submits a signed transaction to the upstream gRPC app endpoint.
- `blob.Get` and `blob.GetAll` verify the Verifiable Encryption proof on the proxied result, and decrypt the data before forwarding it to the client.
First, you need to configure your environment and nodes.
The proxy depends on a connection to:
- A [self] hosted Celestia Data Availability (DA) Node and Consensus App Node to submit and retrieve (verifiable encrypted) blob data.
- QuickNode offers easy integration for both nodes at one endpoint; token auth is supported.
- (Optional) Succinct prover network as a provider to generate Zero-Knowledge Proofs (ZKPs) of data existing on Celestia. See the ZKP program for details on what is proven.

Then any HTTP/1 client can send Celestia JSON RPC calls to the proxy:
# Proxy running on 127.0.0.1:26657
# See: <https://mocha.celenium.io/blob?commitment=S2iIifIPdAjQ33KPeyfAga26FSF3IL11WsCGtJKSOTA=&hash=AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=&height=4499999>
source .env
# blob.Get
curl -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" -X POST \
--data '{ "id": 1, "jsonrpc": "2.0", "method": "blob.Get", "params": [ 4499999, "AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=", "S2iIifIPdAjQ33KPeyfAga26FSF3IL11WsCGtJKSOTA="] }' \
$PBS_SOCKET
# blob.GetAll
curl -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" -X POST \
--data '{ "id": 1, "jsonrpc": "2.0", "method": "blob.GetAll", "params": [ 4499999, [ "AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=" ] ] }' \
$PBS_SOCKET
# blob.Submit (dummy data)
# Note: send "{}" as an empty `tx_config` object, so the node uses its default key to sign & submit to Celestia
curl -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" -X POST \
--data '{ "id": 1, "jsonrpc": "2.0", "method": "blob.Submit", "params": [ [ { "namespace": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAMJ/xGlNMdE=", "data": "DEADB33F", "share_version": 0, "commitment": "aHlbp+J9yub6hw/uhK6dP8hBLR2mFy78XNRRdLf2794=", "index": -1 } ], { } ] }' \
https://$PBS_SOCKET \
--verbose \
--insecure
# ^^^^ DO NOT use insecure TLS in real scenarios!
# blob.Submit (example input ~1.5MB)
cd scripts
./test_example_data_file_via_curl.sh

Celestia has many API client libraries to build around a proxy.
sequenceDiagram
participant JSON RPC Client
participant PBS Proxy
participant Celestia Node
JSON RPC Client->>+PBS Proxy: blob.Submit(blobs, options)<br>{AUTH_TOKEN in header}
PBS Proxy->>PBS Proxy: Job Processing...<br>{If no DB entry, start new zkVM Job}
PBS Proxy->>-JSON RPC Client: Response{"Call back"}
PBS Proxy->>PBS Proxy: ...Job runs to completion...
JSON RPC Client->>+PBS Proxy: blob.Submit(blobs, options)<br>{AUTH_TOKEN in header}
PBS Proxy->>PBS Proxy: Query Job DB<br>Done!<br>{Job Result cached}
PBS Proxy->>Celestia Node: blob.Submit(V. Encrypt. blobs, options)
Celestia Node->>PBS Proxy: Response{Inclusion Block Height}
PBS Proxy->>-JSON RPC Client: Response{Inclusion Block Height}
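A client can simply retry the identical `blob.Submit` request until the proof job completes and the inclusion height is returned. A sketch, assuming the interim "call back" response carries no `result` height (and reusing the dummy blob from the curl examples above):

# Sketch: poll blob.Submit until the zkVM job completes and a height is returned
source .env
REQUEST='{ "id": 1, "jsonrpc": "2.0", "method": "blob.Submit", "params": [ [ { "namespace": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAMJ/xGlNMdE=", "data": "DEADB33F", "share_version": 0, "commitment": "aHlbp+J9yub6hw/uhK6dP8hBLR2mFy78XNRRdLf2794=", "index": -1 } ], {} ] }'
while true; do
  RESPONSE=$(curl -s -H "Content-Type: application/json" \
    -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" \
    -X POST --data "$REQUEST" "$PBS_SOCKET")
  HEIGHT=$(echo "$RESPONSE" | jq -r '.result // empty')
  [ -n "$HEIGHT" ] && break  # job finished: inclusion height available
  sleep 5                    # job still running; retry
done
echo "Included at height: $HEIGHT"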
sequenceDiagram
participant JSON RPC Client
participant PBS Proxy
participant Celestia Node
JSON RPC Client->>+PBS Proxy: blob.Get(height, namespace, commitment)
PBS Proxy->>Celestia Node: <Passthrough>
Celestia Node->>PBS Proxy: Response{namespace,data,<br>share_version,commitment,index}
PBS Proxy->>PBS Proxy: *Try* deserialize & decrypt
PBS Proxy->>-JSON RPC Client: *Success* -> Response{...,decrypted bytes,...}
PBS Proxy->>JSON RPC Client: *Failure* -> <Passthrough>
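To observe the decrypt-on-retrieval behavior, you can compare the blob data returned via the proxy with the raw on-chain data from the node. A sketch, where `$CELESTIA_NODE_URL` is a hypothetical direct endpoint for your upstream node:

# Sketch: the proxy returns decrypted plaintext; the node returns the ciphertext
source .env
REQUEST='{ "id": 1, "jsonrpc": "2.0", "method": "blob.Get", "params": [ 4499999, "AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=", "S2iIifIPdAjQ33KPeyfAga26FSF3IL11WsCGtJKSOTA="] }'
# Via the proxy: "data" is decrypted (on success)
curl -s -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" \
  -X POST --data "$REQUEST" "$PBS_SOCKET" | jq '.result.data'
# Direct to the node: "data" is still the encrypted payload
curl -s -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" \
  -X POST --data "$REQUEST" "$CELESTIA_NODE_URL" | jq '.result.data'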
sequenceDiagram
participant JSON RPC Client
participant PBS Proxy
participant Celestia Node
JSON RPC Client->>+PBS Proxy: Request{<Anything else>}<br>{AUTH_TOKEN in header}
PBS Proxy->>Celestia Node: <Passthrough>
Celestia Node->>PBS Proxy: <Passthrough>
PBS Proxy->>-JSON RPC Client: Response{<Normal API response>}
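Any other method is forwarded untouched and behaves exactly as if you had called the node directly. For example, a header lookup (method availability depends on your upstream node's API):

# Sketch: a non-intercepted call is passed straight through to the node
source .env
curl -s -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" -X POST \
  --data '{ "id": 1, "jsonrpc": "2.0", "method": "header.GetByHeight", "params": [ 4499999 ] }' \
  "$PBS_SOCKET"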
TODO: add a notice that only a single job runs at a time:
- a single GPU is 100% utilized per job
- presently there is no way to scale across multiple GPUs
Most users will want to pull and run this service using Docker or Podman via a container registry; see running containers.
To build and run from source, see the developing instructions.
While you can depend fully on providers for proving and DA nodes and run this proxy on a "potato" (any minimal cloud instance should do, as the service itself is extremely lightweight), you likely want to self-host. With providers you must:
- Fully trust the prover with all plaintext data - thus no privacy is provided, and if using a prover marketplace, you are likely revealing that plaintext to the public, which defeats the use cases this product targets.
- Fully trust the DA node to tell you the truth about DA data, as you are not validating consensus with a light node. You likely also need fail-over in case DA node providers become unresponsive and block your interactions with the DA network upstream.
To run a fully trustless, self-hosted set of services, where you operate your own prover and Celestia node, you need:
- A machine with at minimum:
  - NVIDIA GPU with 20GB+ of VRAM (tested on an L4)
    - Must support CUDA 12+
  - 4+ CPU cores
  - 16GB+ RAM
  - Ports accessible (by default):
    - Service listening at TODO
    - Light client (local or remote) over 26658
    - (Optional) Succinct prover network over 443
  - Example AWS instance: g6.xlarge (single L4 GPU + 8 vCPU)
- A Celestia Light Node installed & running, accessible on `localhost` or elsewhere. Alternatively, use an RPC provider you trust.
  - Configure and fund a Celestia wallet for the node to sign and send transactions with.
  - Generate a node JWT with `write` permissions and set it in `.env` for the proxy to use (a sketch follows below).
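To mint a write-capable JWT from a local light node, something like the following should work (the `--p2p.network` flag is shown for Mocha as an assumption; adjust for your network):

# Sketch: generate a node auth token with write permissions
celestia light auth write --p2p.network mocha
# ...then place the printed token in .env as CELESTIA_NODE_WRITE_TOKEN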
Required and optional settings are best configured via a .env file. See example.env for configurable items.
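As a rough sketch, the variables referenced throughout this README look like the following; treat every value as illustrative and defer to example.env:

# Illustrative values only - see example.env for the authoritative list
PBS_PORT=26657                       # port the proxy listens on
PBS_SOCKET=127.0.0.1:26657           # socket clients send JSON RPC calls to
PBS_DB_PATH=/var/lib/pbs-proxy/db    # persistent job DB (path is illustrative)
CELESTIA_NODE_WRITE_TOKEN=<your JWT> # node auth token with write permissions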
cp example.env .env
# edit .env

The images are available:
# ghcr:
docker pull ghcr.io/celestiaorg/private-blockspace-proxy
# Docker hub:
docker pull celestiaorg/private-blockspace-proxy

Don't forget you need to configure your environment.
Note: only required for self-hosting the ZK prover.
As we don't want to embed huge files, secrets, and dev-only example static files, you will need to place them on the host machine in the following paths:
- Set up DNS to point to your instance, with email and domain.
- Create and update an `.env` (see configuration).
- Select a base OS image to run on the host that includes the CUDA Container Toolkit, or install it manually.
  - See: CUDA Container Toolkit install instructions and AWS NVIDIA docs (or your cloud host's docs for GPU base OS images).
- Run ./scripts/setup_remote_host.sh, or otherwise see the scripts to manually configure the host similarly.
- ONLY for development & testing! Copy the unsafe example TLS files from ./service/static to /app/static on the host. You should use:
  - TLS_CERTS_PATH=/app/static/sample.pem
  - TLS_KEY_PATH=/app/static/sample.rsa
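For real deployments, issue proper certificates for the DNS name you set up above, e.g. with certbot (the Let's Encrypt paths below are assumptions, shown for illustration):

# Sketch: issue a real certificate instead of the unsafe samples
sudo certbot certonly --standalone -d your.domain.example
# Then point the proxy at the issued files, e.g.:
# TLS_CERTS_PATH=/etc/letsencrypt/live/your.domain.example/fullchain.pem
# TLS_KEY_PATH=/etc/letsencrypt/live/your.domain.example/privkey.pem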
Note that scripts run on the host update the /app/.env file with specific settings for the Celestia node.
Logs print very important information; please read them carefully.
With the host correctly set up, you can start both the proxy and a local Celestia node with:

docker compose --env-file /app/.env up -d

Or manually, just the proxy itself:
# if you are developing from this repo:
just docker-run
# If you are only running:
source .env
mkdir -p $PBS_DB_PATH
# Note socket assumes running "normally" with docker managed by root
docker run --rm -it \
--user $(id -u):$(id -g) \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PBS_DB_PATH:$PBS_DB_PATH \
--env-file .env \
--env RUST_LOG=pbs_proxy=debug \
--network=host \
-p $PBS_PORT:$PBS_PORT \
"$DOCKER_CONTAINER_NAME"First, some tooling is required:
- Rust & Cargo - install instructions
- SP1 zkVM Toolchain - install instructions
- Protocol Buffers (Protobuf) compiler - official examples contain install instructions
- (Optional) Just - a modern alternative to `make`
- NVIDIA compiler & container toolkit - https://docs.succinct.xyz/docs/sp1/generating-proofs/hardware-acceleration#software-requirements
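A quick sanity check that this tooling is on your PATH (a sketch; required versions depend on the items above):

# Sketch: verify the toolchain is installed
rustc --version && cargo --version
cargo prove --version   # SP1 zkVM toolchain
protoc --version
just --version          # optional
nvcc --version          # CUDA compiler, only if self-hosting the prover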
Then:
- Clone the repo:

git clone https://github.com/your-repo-name/private-blockspace-proxy.git
cd private-blockspace-proxy

- Choose a Celestia Node
  - See the How-to-guides on nodes to run one yourself, or choose a provider & set it in `.env`.
  - NOTE: You must have the node synced back to the oldest height you may request through this service for it to fulfill such requests.

- Build and run the service:

# NOT optimized, default includes debug logs printed
just run-debug
# Optimized build, to test realistic performance w/ INFO logs
just run-release
There are many other helper scripts exposed in the justfile; get a list with:
# Print just recipes
just
Docker and Podman are configured in the Dockerfile to build an image that includes a few caching layers, to minimize development time & final image size and publish where possible. To build and run in a container:
# Using just
just docker-build
just docker-run
# Manually
## Build
[docker|podman] build -t pbs_proxy .
## Setup
source .env
mkdir -p $PBS_DB_PATH
## Run (example)
[docker|podman] run --rm -it \
  -v $PBS_DB_PATH:$PBS_DB_PATH \
  --env-file .env \
  --env RUST_LOG=pbs_proxy=debug \
  --network=host \
  -p $PBS_PORT:$PBS_PORT \
  pbs_proxy

Importantly, the DB should persist, and the container must have access to connect to the DA light client (likely port 26658) and Succinct network ports (HTTPS over 443).
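A couple of quick checks for those requirements (a sketch):

# Sketch: confirm the DB dir persists and the light client port is reachable
source .env
test -d "$PBS_DB_PATH" && echo "DB path exists: $PBS_DB_PATH"
nc -z localhost 26658 && echo "DA light client port reachable"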
The images are built and published for releases - see running containers for how to pull them.
Based heavily on https://github.com/celestiaorg/eq-service.