A practical hub for operators. We’ve collected the most useful Celestia and Ethereum node commands, systemd snippets, and quick links to public API/RPC endpoints. Copy, paste, and ship.
Mainnet Beta & Mocha Testnet • Bridge / Light Nodes • Keys • Validators • Service control
Public endpoints curated from provider directories and official docs. Use your own/private endpoints in production.
Full node snapshot optimized for bridge nodes to seed EDS data quickly.
https://archive-snap.astrosynx.com/celestia-mainnet/snap_archive.tar.lz4
Snapshot of consensus state for fast recovery of consensus nodes.
https://archive-snap.astrosynx.com/celestia-mainnet/snap_archive.tar.lz4
Smaller snapshot with pruned history for nodes that prefer reduced disk usage.
https://mainnet-snap.astrosynx.com/celestia/snap_mainnet.tar.lz4
Snapshots let operators restore nodes quickly by providing a consistent state archive. For bridge nodes this is particularly useful to seed the EDS (extended data square) shares and resume serving data without replaying the entire history.
Using snapshots reduces recovery time, lowers bandwidth and disk churn during resync, and helps teams recover from disk failures or migration tasks with predictable steps.
Suggested flow: stop the service → replace data folder with snapshot contents → start service and monitor logs.
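The suggested flow can be sketched as below. This is a hedged example, not a canonical procedure: the unit name, data directory, and snapshot URL are placeholders to adjust for your deployment, and the `priv_validator_state.json` backup only matters on validators (it prevents double-signing after a restore).

```shell
# Assumptions: celestia-appd runs as a systemd unit, with the default
# app home at ~/.celestia-app. Replace SNAP_URL with a real snapshot link.
SERVICE="celestia-appd"
DATA_DIR="$HOME/.celestia-app/data"
SNAP_URL="https://example.com/snap.tar.lz4"   # placeholder

sudo systemctl stop "$SERVICE"
# Preserve signing state (validators only) so the restore can't double-sign
cp "$DATA_DIR/priv_validator_state.json" /tmp/priv_validator_state.json.bak
rm -rf "$DATA_DIR"
# Stream, decompress and unpack the snapshot into the app home
curl -L "$SNAP_URL" | lz4 -dc | tar -x -C "$(dirname "$DATA_DIR")"
cp /tmp/priv_validator_state.json.bak "$DATA_DIR/priv_validator_state.json"
sudo systemctl start "$SERVICE"
journalctl -u "$SERVICE" -f
```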
Public mainnet RPC + curated snapshots to bootstrap archive or pruned nodes fast.
Use this endpoint for light integrations, monitoring and quick diagnostics. Prefer private peers for production validator setups.
https://polkadot-mainnet-rpc.astrosynx.com/
Choose a pruned snapshot for validator / RPC nodes, or archive if you need full historical state for analytics.
Always verify chain height and hash after restore before exposing RPC to users.
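One way to do that check, sketched against the public endpoint above as a neutral reference (your local RPC port and the reference URL are assumptions to adapt):

```shell
# Compare your restored node's height against a reference endpoint,
# then fetch the block hash at that height for a cross-check.
LOCAL="http://127.0.0.1:9933"
REF="https://polkadot-mainnet-rpc.astrosynx.com/"

hex_to_dec() { printf "%d\n" "$1"; }   # printf accepts 0x-prefixed hex

header() {
  curl -s -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"chain_getHeader","params":[]}' \
    "$1" | jq -r .result.number
}

local_h=$(hex_to_dec "$(header "$LOCAL")")
ref_h=$(hex_to_dec "$(header "$REF")")
echo "local=$local_h reference=$ref_h"

# Hashes at the same height should match on both nodes
curl -s -H "Content-Type: application/json" \
  -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"chain_getBlockHash\",\"params\":[$local_h]}" \
  "$LOCAL" | jq -r .result
```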
Our validator and node infrastructure is hosted in managed facilities in Malaysia. We choose locations that provide robust power, low-latency network connectivity across APAC, and strong physical security controls.
Base server configuration used across our fleet (standardized to simplify maintenance and monitoring):
Additional fleet-level choices:
Simple bash helpers to detect stuck heights, peer loss and auto-restart your Polkadot node.
#!/usr/bin/env bash
NODE_RPC="http://127.0.0.1:9933"
STATE_FILE="/tmp/polkadot_height"
THRESHOLD_MIN=10
current_height() {
  # chain_getHeader returns the height as 0x-prefixed hex;
  # printf "%d" converts it directly (do not strip the 0x prefix)
  curl -s -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"chain_getHeader","params":[]}' \
    "$NODE_RPC" | jq -r ".result.number" | xargs printf "%d\n"
}
now_ts=$(date +%s)
height=$(current_height)
[ -z "$height" ] && echo "No height from RPC" && exit 1
if [ -f "$STATE_FILE" ]; then
last_height=$(cut -d" " -f1 "$STATE_FILE")
last_ts=$(cut -d" " -f2 "$STATE_FILE")
delta_min=$(( (now_ts - last_ts) / 60 ))
if [ "$height" -le "$last_height" ] && [ "$delta_min" -ge "$THRESHOLD_MIN" ]; then
echo "Height stuck at $height for $delta_min min — restarting polkadot.service"
sudo systemctl restart polkadot.service
fi
fi
echo "$height $now_ts" > "$STATE_FILE"
Script polls local RPC, stores latest height and timestamp, and if the height doesn’t move for
THRESHOLD_MIN minutes, it restarts polkadot.service.
Save the script as /usr/local/bin/polkadot-watch-height.sh & make executable.
Run it from cron (e.g. every 5 minutes) for lightweight self-healing.
Adjust NODE_RPC and threshold to your environment.
#!/usr/bin/env bash
NODE_RPC="http://127.0.0.1:9933"
MIN_PEERS=10
peers=$(curl -s -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"system_health","params":[]}' \
"$NODE_RPC" | jq -r ".result.peers")
if [ -z "$peers" ] || [ "$peers" = "null" ]; then echo "No data from system_health"; exit 1; fi
echo "Current peers: $peers"
if [ "$peers" -lt "$MIN_PEERS" ]; then
echo "Peers below threshold ($MIN_PEERS) — restarting polkadot.service"
sudo systemctl restart polkadot.service
fi
Polls system_health and restarts the node when connected peers drop below MIN_PEERS.
Useful when networking occasionally degrades or upstream routing flaps.
Combine with external monitoring (Prometheus / Grafana / Alertmanager) so you also get alerts when auto-restarts happen too often.
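One lightweight way to surface restarts to Prometheus is the node_exporter textfile collector. This is a sketch under assumptions: the textfile directory and the metric name are illustrative, and `record_restart` is a hypothetical helper you would call just before `systemctl restart` in the watchdog scripts.

```shell
# Assumption: node_exporter runs with
#   --collector.textfile.directory=/var/lib/node_exporter/textfile
TEXTFILE_DIR="/var/lib/node_exporter/textfile"
METRIC_FILE="$TEXTFILE_DIR/polkadot_watchdog.prom"

record_restart() {
  # Read the previous counter value (0 if the file doesn't exist yet),
  # then write the incremented counter back in Prometheus text format.
  local prev=0
  [ -f "$METRIC_FILE" ] && prev=$(awk '/^polkadot_watchdog_restarts_total/ {print $2}' "$METRIC_FILE")
  printf 'polkadot_watchdog_restarts_total %d\n' "$((prev + 1))" > "$METRIC_FILE"
}
```

Alert on `increase(polkadot_watchdog_restarts_total[1h])` to catch nodes that are flapping rather than healing.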
sudo mkdir -p /etc/systemd/system/polkadot.service.d
sudo tee /etc/systemd/system/polkadot.service.d/override.conf >/dev/null <<EOF
[Unit]
StartLimitIntervalSec=600
StartLimitBurst=5
[Service]
Restart=on-failure
RestartSec=5
EOF
sudo systemctl daemon-reload
sudo systemctl restart polkadot.service
Systemd-level tuning that ensures your node comes back automatically on crashes,
while StartLimit* guards against infinite restart loops.
Pair this with the height/peers watchdog scripts for a pragmatic, production-friendly self-healing setup.
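A minimal way to wire the watchdogs into cron; the script paths (including the peers-watchdog filename) are assumptions, so match them to wherever you saved the scripts:

```shell
# Make both watchdog scripts executable, then append cron entries that
# run them every 5 minutes and log output for later inspection.
sudo chmod +x /usr/local/bin/polkadot-watch-height.sh \
              /usr/local/bin/polkadot-watch-peers.sh
( crontab -l 2>/dev/null
  echo "*/5 * * * * /usr/local/bin/polkadot-watch-height.sh >> /var/log/polkadot-watch.log 2>&1"
  echo "*/5 * * * * /usr/local/bin/polkadot-watch-peers.sh >> /var/log/polkadot-watch.log 2>&1"
) | crontab -
```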
Use these endpoints to cross-check heights, peers and state against your own nodes when debugging incidents.
Point tooling and one-off checks to the open cluster instead of your validator nodes during investigations.
https://polkadot-mainnet-rpc.astrosynx.com/
Ideal for subscribing to finality, heads and health streams from a neutral reference node set.
wss://polkadot-mainnet-rpc.astrosynx.com/
Compare chain_getHeader height vs your node when suspecting stalls.
Check system_health peers to distinguish local vs network-wide issues.
Bridge & Light nodes • Keys • Validators • SystemD
Core Celestia recipes for light & bridge nodes, snapshots and common maintenance tasks.
celestia light init
# Mocha testnet:
celestia light init --p2p.network mocha
Initializes a Light DA node store and config. Add --p2p.network mocha for testnet. After init, start the node pointing to a validator gRPC endpoint (typically port 9090).
celestia light start --core.ip rpc.celestia.pops.one --core.port 9090
celestia light start --core.ip rpc-mocha.pops.one --core.port 9090 --p2p.network mocha
# Init:
celestia bridge init --core.ip <RPC_HOST> --core.port 9090
# Start (Mocha):
celestia bridge start --core.ip rpc-mocha.pops.one --core.port 9090 --p2p.network mocha
The bridge node connects consensus (celestia-app) to the DA network and serves EDS shares to light nodes. It needs a trusted gRPC endpoint and P2P ports (2121 TCP/UDP).
Tip: ensure gRPC is enabled in app.toml on your consensus node.
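The relevant app.toml section looks roughly like this (a sketch; with a default home the file lives at ~/.celestia-app/config/app.toml, and binding to 0.0.0.0 is only needed when the bridge node connects remotely):

```toml
# ~/.celestia-app/config/app.toml
[grpc]
# Enable the gRPC server so bridge nodes can connect on port 9090.
enable = true
address = "0.0.0.0:9090"
```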
# Build cel-key:
make cel-key
# Add key (Light/Mocha):
./cel-key add my_celes_key --keyring-backend test --node.type light --p2p.network mocha
# List keys:
./cel-key list --node.type light --keyring-backend test --p2p.network mocha
cel-key manages keys used by celestia-node. Keep exported keys/passwords secure. Active address available via celestia state account-address.
sudo systemctl stop celestia-appd
# Back up signing state first (validators only):
cp ~/.celestia-app/data/priv_validator_state.json ~/.celestia-app/priv_validator_state.json.bak
rm -rf ~/.celestia-app/data
curl -L <SNAP_URL> | lz4 -dc | tar -x -C ~/.celestia-app
cp ~/.celestia-app/priv_validator_state.json.bak ~/.celestia-app/data/priv_validator_state.json
sudo systemctl start celestia-appd
journalctl -u celestia-appd -f
Fast recovery of celestia-appd from snapshot (default home is ~/.celestia-app). On validators, restore priv_validator_state.json after unpacking to avoid double-signing. Replace <SNAP_URL> with the provider snapshot link.
sudo tee /etc/systemd/system/celestia-bridge.service >/dev/null <<EOF
[Unit]
Description=celestia-bridge
After=network-online.target
[Service]
User=$USER
ExecStart=$(which celestia) bridge start
Restart=on-failure
RestartSec=3
LimitNOFILE=1400000
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable celestia-bridge
sudo systemctl start celestia-bridge && sudo journalctl -u celestia-bridge.service -f
Runs your bridge node as a background service with restart policy and increased file descriptor limit. Reuse structure for other celestia services.
# service status & logs (bridge):
sudo systemctl status celestia-bridge
sudo journalctl -u celestia-bridge.service -f
# Tendermint sync info (consensus RPC):
curl -s localhost:26657/status | jq .result.sync_info
Minimal checks to verify liveness and sync. Replace host/ports to match your setup when using remote endpoints or containers.
Common recipes for execution clients (geth), consensus clients (Lighthouse), validator management and quick RPC checks.
# Download and run geth (example):
wget https://gethstore.blob.core.windows.net/builds/geth-linux-amd64-1.13.5-...tar.gz
tar -xzf geth-*.tar.gz
sudo mv geth*/geth /usr/local/bin/geth
geth --http --http.addr 0.0.0.0 --http.port 8545 --http.api eth,net,web3,txpool --syncmode snap --datadir /var/lib/geth
Use the official geth release URL for your version/architecture. --syncmode snap uses snapshot sync for fast bootstrap. Expose RPC only behind firewalls or reverse proxies with authentication.
# Install Lighthouse (example):
curl https://sh.rustup.rs -sSf | sh
source "$HOME/.cargo/env"
git clone https://github.com/sigp/lighthouse.git && cd lighthouse
make
# Run beacon node (mainnet); the JWT secret must match the execution client's:
lighthouse bn --network mainnet --http --http-address 0.0.0.0 --execution-endpoint http://127.0.0.1:8551 --execution-jwt /var/lib/geth/geth/jwtsecret
Lighthouse is a popular Rust-based consensus client. Configure the execution endpoint (geth or other) plus the shared JWT secret, and enable the HTTP API for validator clients. With geth's --datadir /var/lib/geth the auto-generated secret lives at /var/lib/geth/geth/jwtsecret. Use the --datadir flag to control Lighthouse's storage location.
# Start validator client:
lighthouse vc --network mainnet --beacon-nodes http://127.0.0.1:5052 --datadir /var/lib/lighthouse/validator
Key generation / deposits are outside this snippet — use the official deposit-cli or recommended tooling. After deposit, place your keystores and enable the validator client to sign duties.
sudo tee /etc/systemd/system/geth.service >/dev/null <<EOF
[Unit]
Description=Geth Ethereum node
After=network-online.target
[Service]
User=$USER
ExecStart=/usr/local/bin/geth --http --http.addr 127.0.0.1 --http.port 8545 --http.api eth,net,web3 --syncmode snap --datadir /var/lib/geth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable geth
sudo systemctl start geth
sudo tee /etc/systemd/system/lighthouse-bn.service >/dev/null <<EOF
[Unit]
Description=Lighthouse beacon node
After=network-online.target
[Service]
User=$USER
ExecStart=/usr/local/bin/lighthouse bn --network mainnet --http --http-address 127.0.0.1 --execution-endpoint http://127.0.0.1:8551 --execution-jwt /var/lib/geth/geth/jwtsecret
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable lighthouse-bn
sudo systemctl start lighthouse-bn
# Current block:
curl -s -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' -H "Content-Type: application/json" http://127.0.0.1:8545
# Balance (replace ADDRESS):
curl -s -X POST --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xADDRESS","latest"],"id":1}' -H "Content-Type: application/json" http://127.0.0.1:8545
Simple JSON-RPC queries against a local/exposed execution RPC. Use an authenticated gateway or restrict network access in production.
# Geth status & logs:
sudo systemctl status geth
sudo journalctl -u geth -f
# Lighthouse status & logs:
sudo systemctl status lighthouse-bn
sudo journalctl -u lighthouse-bn -f
Monitor both execution and consensus clients when troubleshooting validator issues or sync problems. Check resource usage and peer counts.
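Quick peer-count checks for both sides, assuming the default ports used above (geth HTTP on 8545, Lighthouse beacon API on 5052):

```shell
# Geth: net_peerCount returns 0x-prefixed hex; printf "%d" converts it
peers_hex=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  http://127.0.0.1:8545 | jq -r .result)
printf "geth peers: %d\n" "$peers_hex"

# Lighthouse: standard beacon API peer-count endpoint (decimal strings)
curl -s http://127.0.0.1:5052/eth/v1/node/peer_count | jq .data
```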
Operational and diagnostic commands for Story nodes, tooling and analytics pipelines.
# Check installed version
storyd version
# Basic node status
storyd status
Confirms that the Story binary is installed correctly and the node is reachable. Useful as a first sanity check after upgrades or restarts.
# Tendermint sync status
curl -s localhost:26657/status | jq .result.sync_info
# Current block height
curl -s localhost:26657/status | jq .result.sync_info.latest_block_height
Helps operators detect stalls, slow sync, or peer-related issues before they impact uptime or downstream services.
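A minimal stall check built from the commands above, sampling the height twice (the 60-second interval and local RPC port 26657 are assumptions to tune):

```shell
# Sample the reported height twice; if it hasn't advanced, flag a stall.
h1=$(curl -s localhost:26657/status | jq -r .result.sync_info.latest_block_height)
sleep 60
h2=$(curl -s localhost:26657/status | jq -r .result.sync_info.latest_block_height)
if [ "$h2" -le "$h1" ]; then
  echo "Height stuck at $h2, check peers and logs"
fi
```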
# Clone Story analytics / scanner tooling
git clone https://github.com/your-org/your-story-tool.git
cd your-story-tool
# Install deps / run
make install
make run
Example workflow for Story-focused analytics and observability tools. Typically used to inspect IP activity, transaction patterns, and higher-level network behavior.
# Service status
sudo systemctl status storyd
# Live logs
sudo journalctl -u storyd -f
Core operational commands for debugging crashes, upgrade issues, and runtime anomalies on Story validator or full nodes.
Quick operational recipes for Monad nodes: RPC health, sync checks, peer visibility, logs, and service control.
Set your local/remote RPC endpoint and run these copy-and-check commands to validate liveness and sync.
MONAD_TESTNET_RPC="http://127.0.0.1:8545"
# Chain ID
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}' \
"$MONAD_TESTNET_RPC" | jq
# Latest block
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
"$MONAD_TESTNET_RPC" | jq
# Sync status
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"eth_syncing","params":[]}' \
"$MONAD_TESTNET_RPC" | jq
# Peer count
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"net_peerCount","params":[]}' \
"$MONAD_TESTNET_RPC" | jq
Minimal RPC pack to confirm your Monad testnet node is alive: chain identity, block progress, syncing mode,
and peer connectivity. If eth_blockNumber stalls while peers are low, investigate networking and logs.
Replace MONAD_TESTNET_RPC with your node RPC (local or reverse-proxied).
eth_syncing can be false on fully synced nodes.
net_peerCount returns hex; compare trends rather than single values.
MONAD_TESTNET_UNIT="monad-testnet"
# Status
sudo systemctl status ${MONAD_TESTNET_UNIT}
# Live logs
sudo journalctl -u ${MONAD_TESTNET_UNIT} -f
# Restart
sudo systemctl restart ${MONAD_TESTNET_UNIT}
Service-first ops loop for Monad testnet: check status → tail logs → restart when necessary. Keep testnet units separate from mainnet to avoid accidental restarts.
Tip: if your unit name differs (e.g. monad.service), just change MONAD_TESTNET_UNIT.
Same core checks, but treat mainnet as production: restrict RPC exposure and keep safe restart policies.
MONAD_MAINNET_RPC="http://127.0.0.1:8545"
# Chain ID
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}' \
"$MONAD_MAINNET_RPC" | jq
# Latest block
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
"$MONAD_MAINNET_RPC" | jq
# Sync status
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"eth_syncing","params":[]}' \
"$MONAD_MAINNET_RPC" | jq
# Node client/version (optional)
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","id":1,"method":"web3_clientVersion","params":[]}' \
"$MONAD_MAINNET_RPC" | jq
On mainnet, these checks double as a fast incident triage: confirm the node’s chain identity and whether block height is progressing. Prefer local RPC (127.0.0.1) behind a reverse proxy + auth.
web3_clientVersion is useful to confirm you're on the expected release.
MONAD_MAINNET_UNIT="monad"
sudo mkdir -p /etc/systemd/system/${MONAD_MAINNET_UNIT}.service.d
sudo tee /etc/systemd/system/${MONAD_MAINNET_UNIT}.service.d/override.conf >/dev/null <<EOF
[Unit]
StartLimitIntervalSec=600
StartLimitBurst=5
[Service]
Restart=on-failure
RestartSec=5
EOF
sudo systemctl daemon-reload
sudo systemctl restart ${MONAD_MAINNET_UNIT}
sudo systemctl status ${MONAD_MAINNET_UNIT}
Hardened restart policy reduces downtime from transient failures while preventing infinite restart loops.
Adjust the unit name to match your deployment (e.g. monad.service, monad-node).
Combine with external monitoring (Prometheus/Grafana/Alertmanager) so you get alerted when restarts occur frequently.