AstroSynx TECH

A practical hub for operators. We’ve collected the most useful Celestia and Ethereum node commands, systemd snippets, and quick links to public API/RPC endpoints. Copy, paste, and ship.

Mainnet Beta & Mocha Testnet • Bridge / Light Nodes • Keys • Validators • Service control

Celestia Endpoints • API & RPC

Public endpoints curated from provider directories and official docs. Use your own/private endpoints in production.

Celestia Snapshots • Fast Restore
Snapshot Bridge

Full node snapshot optimized for bridge nodes to seed EDS data quickly.

https://archive-snap.astrosynx.com/celestia-mainnet/snap_archive.tar.lz4
Snapshot Consensus

Snapshot of consensus state for fast recovery of consensus nodes.

https://archive-snap.astrosynx.com/celestia-mainnet/snap_archive.tar.lz4
Snapshot Consensus (pruned)

Smaller snapshot with pruned history for nodes that prefer reduced disk usage.

https://mainnet-snap.astrosynx.com/celestia/snap_mainnet.tar.lz4

Why snapshots matter for Celestia bridge nodes

Snapshots let operators restore nodes quickly by providing a consistent state archive. For bridge nodes this is particularly useful to seed the EDS (extended data square) store and resume serving data without replaying the entire history.

Using snapshots reduces recovery time, lowers bandwidth and disk churn during resync, and helps teams recover from disk failures or migration tasks with predictable steps.

Suggested flow: stop the service → replace data folder with snapshot contents → start service and monitor logs.
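
The stop → swap → start flow above can be sketched as a small helper script. This is a hedged example for a celestia-app consensus/bridge host: the snapshot URL is an argument you supply, and the unit name and home directory are assumed defaults you should adapt.

```shell
# Write a restore helper to /tmp (sketch; adjust unit name and paths to your setup)
cat > /tmp/celestia-restore.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
SNAP_URL="${1:?usage: celestia-restore.sh <snapshot-url>}"
APP_HOME="${APP_HOME:-$HOME/.celestia-app}"   # assumed default home

sudo systemctl stop celestia-appd
# Keep validator signing state so the node cannot double-sign after restore
if [ -f "$APP_HOME/data/priv_validator_state.json" ]; then
  cp "$APP_HOME/data/priv_validator_state.json" /tmp/pvs.json.bak
fi
rm -rf "$APP_HOME/data"
curl -L "$SNAP_URL" | lz4 -dc | tar -x -C "$APP_HOME"
if [ -f /tmp/pvs.json.bak ]; then
  cp /tmp/pvs.json.bak "$APP_HOME/data/priv_validator_state.json"
fi
sudo systemctl start celestia-appd
journalctl -u celestia-appd -f
EOF
chmod +x /tmp/celestia-restore.sh
bash -n /tmp/celestia-restore.sh && echo "restore script OK"
```

Keep the script next to your runbooks and always tail logs after start until the node reports sync progress.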

Polkadot • Endpoints & Snapshots

Public mainnet RPC + curated snapshots to bootstrap archive or pruned nodes fast.

RPC Polkadot Mainnet

HTTPS RPC endpoint

Use this endpoint for light integrations, monitoring and quick diagnostics. Prefer private peers for production validator setups.

https://polkadot-mainnet-rpc.astrosynx.com/
Snapshots Polkadot Mainnet

Pruned & Archive

Choose a pruned snapshot for validator / RPC nodes, or archive if you need full historical state for analytics.

Always verify chain height and hash after restore before exposing RPC to users.
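
One way to run that verification is to parse chain_getHeader output and compare the block hash at a known height against a reference endpoint. The header_height helper below is our own convenience function, not part of any Polkadot tooling, and the endpoints are examples.

```shell
# Extract the decimal height from a chain_getHeader response
header_height() { jq -r '.result.number' | xargs printf '%d\n'; }

# Live check against your restored node (uncomment):
# curl -s -H "Content-Type: application/json" \
#   -d '{"jsonrpc":"2.0","id":1,"method":"chain_getHeader","params":[]}' \
#   http://127.0.0.1:9933 | header_height

# Then compare the hash at that height against a reference endpoint:
# curl -s -H "Content-Type: application/json" \
#   -d '{"jsonrpc":"2.0","id":1,"method":"chain_getBlockHash","params":["0x12d687"]}' \
#   http://127.0.0.1:9933

# Canned response to show the hex-to-decimal conversion:
echo '{"jsonrpc":"2.0","result":{"number":"0x12d687"},"id":1}' | header_height   # 1234567
```

If the hashes at the same height differ between your node and a trusted reference, re-restore from a known-good snapshot before exposing RPC.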

Infrastructure • Machine configuration

Data center footprint & hardware

Our validator and node infrastructure is hosted in managed facilities in Malaysia. We choose locations that provide robust power, low-latency network connectivity across APAC, and strong physical security controls.

Base server configuration used across our fleet (standardized to simplify maintenance and monitoring):

  • CPU: Ryzen 9 7950X3D (16C / 32T)
  • Memory: 128 GB DDR5
  • Storage: 8 TB NVMe

Additional fleet-level choices:

  • Redundant networking (dual upstream providers, BGP where available).
  • Automated monitoring with alerting, remote KVM access and out-of-band management.
  • Regular firmware and BIOS tuning for latency and stability; conservative power and thermal profiles for 24/7 operation.
Primary location: Malaysia — APAC
Optimized for: Validator operations • Full nodes • Snapshot hosts

Recovery Scripts • Polkadot

Self-healing node recipes

Simple bash helpers to detect stuck heights, peer loss and auto-restart your Polkadot node.

Detect stuck height • auto-restart
#!/usr/bin/env bash
NODE_RPC="http://127.0.0.1:9933"
STATE_FILE="/tmp/polkadot_height"
THRESHOLD_MIN=10

current_height() {
  # .result.number is a 0x-prefixed hex quantity; printf %d converts it directly
  curl -s -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"chain_getHeader","params":[]}' \
    "$NODE_RPC" | jq -r ".result.number" | xargs printf "%d\n"
}

now_ts=$(date +%s)
height=$(current_height)
[ -z "$height" ] && echo "No height from RPC" && exit 1

if [ -f "$STATE_FILE" ]; then
  last_height=$(cut -d" " -f1 "$STATE_FILE")
  last_ts=$(cut -d" " -f2 "$STATE_FILE")

  if [ "$height" -gt "$last_height" ]; then
    # Progress: record the new height and reset the timer
    echo "$height $now_ts" > "$STATE_FILE"
  else
    delta_min=$(( (now_ts - last_ts) / 60 ))
    if [ "$delta_min" -ge "$THRESHOLD_MIN" ]; then
      echo "Height stuck at $height for $delta_min min — restarting polkadot.service"
      sudo systemctl restart polkadot.service
      echo "$height $now_ts" > "$STATE_FILE"
    fi
  fi
else
  echo "$height $now_ts" > "$STATE_FILE"
fi

How to use

Script polls local RPC, stores latest height and timestamp, and if the height doesn’t move for THRESHOLD_MIN minutes, it restarts polkadot.service.

  • Save as /usr/local/bin/polkadot-watch-height.sh & make executable.
  • Schedule via cron (e.g. every 5 minutes) for lightweight self-healing.
  • Adjust NODE_RPC and threshold to your environment.
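
The cron scheduling from the bullets above might look like this; the log path is an example, and the commented line shows one way to append it to the current crontab.

```shell
# Cron line for the watchdog: every 5 minutes, logging to an example path
CRON_LINE='*/5 * * * * /usr/local/bin/polkadot-watch-height.sh >> /var/log/polkadot-watch.log 2>&1'
echo "$CRON_LINE"
# ( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -   # uncomment to install
```

With a 5-minute cron interval and THRESHOLD_MIN=10, a stall triggers a restart on the third unchanged sample.
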
Detect peer loss • auto-restart
#!/usr/bin/env bash
NODE_RPC="http://127.0.0.1:9933"
MIN_PEERS=10

peers=$(curl -s -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"system_health","params":[]}' \
  "$NODE_RPC" | jq -r ".result.peers")

{ [ -z "$peers" ] || [ "$peers" = "null" ]; } && echo "No data from system_health" && exit 1

echo "Current peers: $peers"
if [ "$peers" -lt "$MIN_PEERS" ]; then
  echo "Peers below threshold ($MIN_PEERS) — restarting polkadot.service"
  sudo systemctl restart polkadot.service
fi

When peers collapse

Polls system_health and restarts the node when connected peers drop below MIN_PEERS. Useful when networking occasionally degrades or upstream routing flaps.

Combine with external monitoring (Prometheus / Grafana / Alertmanager) so you also get alerts when auto-restarts happen too often.

SystemD • harden auto-restart
sudo mkdir -p /etc/systemd/system/polkadot.service.d
sudo tee /etc/systemd/system/polkadot.service.d/override.conf >/dev/null <<EOF
[Unit]
StartLimitIntervalSec=600
StartLimitBurst=5

[Service]
Restart=on-failure
RestartSec=5
EOF

sudo systemctl daemon-reload
sudo systemctl restart polkadot.service

Tighten the service loop

Systemd-level tuning that ensures your node comes back automatically on crashes, while StartLimit* guards against infinite restart loops.

Pair this with the height/peers watchdog scripts for a pragmatic, production-friendly self-healing setup.

Open RPC Cluster • Diagnostics

Open Polkadot RPC / WS for verification

Use these endpoints to cross-check heights, peers and state against your own nodes when debugging incidents.

Cluster HTTPS RPC

Load-balanced endpoint

Point tooling and one-off checks to the open cluster instead of your validator nodes during investigations.

https://polkadot-mainnet-rpc.astrosynx.com/
Cluster WebSocket

Subscriptions / tracing

Ideal for subscribing to finality, heads and health streams from a neutral reference node set.

wss://polkadot-mainnet-rpc.astrosynx.com/
Playbook • How to use

Compare & verify

  • Compare chain_getHeader height vs your node when suspecting stalls.
  • Check system_health peers to distinguish local vs network-wide issues.
  • Use as a baseline when restoring from snapshots or migrating hardware.
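
A quick lag check ties these bullets together. block_lag and get_height are our own helper names (not from Polkadot tooling); the reference endpoint is the cluster above and the local one is your node.

```shell
# Difference between a reference height and your node's height (both decimal)
block_lag() { echo $(( $1 - $2 )); }

# With live endpoints (uncomment; heights parsed as in the watchdog script):
# get_height() { curl -s -H "Content-Type: application/json" \
#   -d '{"jsonrpc":"2.0","id":1,"method":"chain_getHeader","params":[]}' \
#   "$1" | jq -r '.result.number' | xargs printf '%d\n'; }
# block_lag "$(get_height https://polkadot-mainnet-rpc.astrosynx.com/)" \
#           "$(get_height http://127.0.0.1:9933)"

block_lag 1234567 1234500   # 67
```

A small positive lag is normal; a growing lag while the cluster advances points at a local stall rather than a network-wide issue.
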
Node Commands

Copy-and-Run Recipes

Bridge & Light nodes • Keys • Validators • SystemD

Celestia commands

Core Celestia recipes for light & bridge nodes, snapshots and common maintenance tasks.

Light Node • init
celestia light init
# Mocha testnet:
celestia light init --p2p.network mocha

What it does

Initializes a Light DA node store and config. Add --p2p.network mocha for testnet. After init, start the node pointing it at a consensus (core) node's gRPC endpoint (typically port 9090).

Start (Mainnet/Testnet):
celestia light start --core.ip rpc.celestia.pops.one --core.port 9090
celestia light start --core.ip rpc-mocha.pops.one --core.port 9090 --p2p.network mocha
Bridge Node • init & start
# Init:
celestia bridge init --core.ip <RPC_HOST> --core.port 9090
# Start (Mocha):
celestia bridge start --core.ip rpc-mocha.pops.one --core.port 9090 --p2p.network mocha

Why

The bridge node connects consensus (celestia-app) to the DA network and serves EDS shares to light nodes. It needs a trusted gRPC endpoint and P2P ports (2121 TCP/UDP).

Tip: ensure gRPC is enabled in app.toml on your consensus node.
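
For reference, the relevant app.toml section on the consensus node typically looks like this; the values shown are common defaults, not taken from your specific configuration.

```toml
[grpc]
# gRPC must be enabled and reachable by the bridge node
enable = true
address = "0.0.0.0:9090"
```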

Keys • cel-key
# Build cel-key:
make cel-key
# Add key (Light/Mocha):
./cel-key add my_celes_key --keyring-backend test --node.type light --p2p.network mocha
# List keys:
./cel-key list --node.type light --keyring-backend test --p2p.network mocha

What it does

cel-key manages keys used by celestia-node. Keep exported keys/passwords secure. Active address available via celestia state account-address.

Snapshot • celestia-app
sudo systemctl stop celestia-appd
# Preserve validator signing state before wiping data
cp ~/.celestia-app/data/priv_validator_state.json ~/pvs.json.bak
rm -rf ~/.celestia-app/data
curl -L <SNAP_URL> | lz4 -dc | tar -x -C ~/.celestia-app
cp ~/pvs.json.bak ~/.celestia-app/data/priv_validator_state.json
sudo systemctl start celestia-appd
journalctl -u celestia-appd -f

Why

Fast recovery of celestia-appd from snapshot. Replace <SNAP_URL> with provider snapshot link.

SystemD • bridge
sudo tee /etc/systemd/system/celestia-bridge.service >/dev/null <<EOF
[Unit]
Description=celestia-bridge
After=network-online.target

[Service]
User=$USER
ExecStart=$(which celestia) bridge start
Restart=on-failure
RestartSec=3
LimitNOFILE=1400000

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable celestia-bridge
sudo systemctl start celestia-bridge && sudo journalctl -u celestia-bridge.service -f

What it does

Runs your bridge node as a background service with restart policy and increased file descriptor limit. Reuse structure for other celestia services.

Health & Logs
# service status & logs (bridge):
sudo systemctl status celestia-bridge
sudo journalctl -u celestia-bridge.service -f

# Tendermint sync info (consensus RPC):
curl -s localhost:26657/status | jq .result.sync_info

Why

Minimal checks to verify liveness and sync. Replace host/ports to match your setup when using remote endpoints or containers.
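
To turn the sync check into a yes/no answer, catching_up can be extracted directly; the helper name below is ours, not part of Celestia tooling, and the canned response just demonstrates the extraction.

```shell
# catching_up is false once the consensus node is fully synced
catching_up() { jq -r '.result.sync_info.catching_up'; }

# Live (uncomment):
# curl -s localhost:26657/status | catching_up

# Canned /status response to show the extraction:
echo '{"result":{"sync_info":{"catching_up":false}}}' | catching_up   # false
```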

Ethereum commands

Common recipes for execution clients (geth), consensus clients (Lighthouse), validator management and quick RPC checks.

Execution • Geth (quick start)
# Download and run geth (example):
wget https://gethstore.blob.core.windows.net/builds/geth-linux-amd64-1.13.5-...tar.gz
tar -xzf geth-*.tar.gz
sudo mv geth*/geth /usr/local/bin/geth
geth --http --http.addr 0.0.0.0 --http.port 8545 --http.api eth,net,web3,txpool --syncmode snap --datadir /var/lib/geth

Notes

Use the official geth release URL for your version/architecture. --syncmode snap uses snapshot sync for fast bootstrap. Expose RPC only behind firewalls or reverse proxies with authentication.
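
Since the consensus client authenticates against geth's engine API (port 8551), both sides need the same JWT secret. A sketch, using an example path (production setups usually keep it under a root-owned directory like /var/lib/jwtsecret):

```shell
# Generate a 32-byte hex JWT secret (example path; tighten permissions in production)
JWT_PATH="${JWT_PATH:-/tmp/jwt.hex}"
openssl rand -hex 32 > "$JWT_PATH"
wc -c < "$JWT_PATH"    # 65 bytes: 64 hex chars + newline

# Then point both clients at it:
#   geth          --authrpc.jwtsecret "$JWT_PATH" ...
#   lighthouse bn --execution-jwt "$JWT_PATH" ...
```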

Consensus • Lighthouse
# Install Lighthouse (example):
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/sigp/lighthouse.git && cd lighthouse
make
# Run beacon node (mainnet); the JWT secret path is an example and must match geth's authrpc.jwtsecret:
lighthouse bn --network mainnet --http --http-address 0.0.0.0 --execution-endpoints http://127.0.0.1:8551 --execution-jwt /var/lib/jwtsecret/jwt.hex

Why Lighthouse?

Lighthouse is a popular Rust-based consensus client. Configure the execution endpoint (geth or another execution client) together with the shared JWT secret (--execution-jwt), and enable the HTTP API for validator clients. Use the --datadir flag to control storage location.

Validator • Lighthouse (vc)
# Start validator client:
lighthouse vc --network mainnet --beacon-node http://127.0.0.1:5052 --datadir /var/lib/lighthouse/validator

Validator keys

Key generation / deposits are outside this snippet — use the official deposit-cli or recommended tooling. After deposit, place your keystores and enable the validator client to sign duties.

SystemD • geth
sudo tee /etc/systemd/system/geth.service >/dev/null <<EOF
[Unit]
Description=Geth Ethereum node
After=network-online.target

[Service]
User=$USER
ExecStart=/usr/local/bin/geth --http --http.addr 127.0.0.1 --http.port 8545 --http.api eth,net,web3 --syncmode snap --authrpc.jwtsecret /var/lib/jwtsecret/jwt.hex --datadir /var/lib/geth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable geth
sudo systemctl start geth
SystemD • lighthouse (bn)
sudo tee /etc/systemd/system/lighthouse-bn.service >/dev/null <<EOF
[Unit]
Description=Lighthouse beacon node
After=network-online.target

[Service]
User=$USER
ExecStart=/usr/local/bin/lighthouse bn --network mainnet --http --http-address 127.0.0.1 --execution-endpoints http://127.0.0.1:8551 --execution-jwt /var/lib/jwtsecret/jwt.hex
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable lighthouse-bn
sudo systemctl start lighthouse-bn
Quick RPC • eth
# Current block:
curl -s -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' -H "Content-Type: application/json" http://127.0.0.1:8545

# Balance (replace ADDRESS):
curl -s -X POST --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xADDRESS","latest"],"id":1}' -H "Content-Type: application/json" http://127.0.0.1:8545

Why

Simple JSON-RPC queries against a local/exposed execution RPC. Use an authenticated gateway or restrict network access in production.
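
JSON-RPC quantities come back as 0x-prefixed hex. For block numbers a printf conversion is enough; balances in wei can exceed 64 bits, so use bc or python for those. The hex2dec helper is our own convenience, not part of any client.

```shell
# Convert a 0x-hex quantity that fits in 64 bits (e.g. block numbers)
hex2dec() { printf '%d\n' "$1"; }

# Live (uncomment):
# hex2dec "$(curl -s -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
#   -H "Content-Type: application/json" http://127.0.0.1:8545 | jq -r .result)"

hex2dec 0x10d4f   # 68943
```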

Health • logs (geth & lighthouse)
# Geth status & logs:
sudo systemctl status geth
sudo journalctl -u geth -f

# Lighthouse status & logs:
sudo systemctl status lighthouse-bn
sudo journalctl -u lighthouse-bn -f

Why

Monitor both execution and consensus clients when troubleshooting validator issues or sync problems. Check resource usage and peer counts.

Story Protocol

Story commands

Operational and diagnostic commands for Story nodes, tooling and analytics pipelines.

Story • install & version
# Check installed version
storyd version

# Basic node status
storyd status

What it does

Confirms that the Story binary is installed correctly and the node is reachable. Useful as a first sanity check after upgrades or restarts.

Story • chain & sync
# Tendermint sync status
curl -s localhost:26657/status | jq .result.sync_info

# Current block height
curl -s localhost:26657/status | jq .result.sync_info.latest_block_height

Why

Helps operators detect stalls, slow sync, or peer-related issues before they impact uptime or downstream services.

Story • analytics tooling
# Clone Story analytics / scanner tooling
git clone https://github.com/your-org/your-story-tool.git
cd your-story-tool

# Install deps / run
make install
make run

Context

Example workflow for Story-focused analytics and observability tools. Typically used to inspect IP activity, transaction patterns, and higher-level network behavior.

Story • logs & service
# Service status
sudo systemctl status storyd

# Live logs
sudo journalctl -u storyd -f

Operational use

Core operational commands for debugging crashes, upgrade issues, and runtime anomalies on Story validator or full nodes.

Monad

Monad commands

Quick operational recipes for Monad nodes: RPC health, sync checks, peer visibility, logs, and service control.

Monad Testnet

Set your local/remote RPC endpoint and run these copy-and-check commands to validate liveness and sync.

Monad Testnet • RPC sanity checks
MONAD_TESTNET_RPC="http://127.0.0.1:8545"

# Chain ID
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}' \
  "$MONAD_TESTNET_RPC" | jq

# Latest block
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
  "$MONAD_TESTNET_RPC" | jq

# Sync status
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_syncing","params":[]}' \
  "$MONAD_TESTNET_RPC" | jq

# Peer count
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"net_peerCount","params":[]}' \
  "$MONAD_TESTNET_RPC" | jq

What it checks

Minimal RPC pack to confirm your Monad testnet node is alive: chain identity, block progress, syncing mode, and peer connectivity. If eth_blockNumber stalls while peers are low, investigate networking and logs.

  • Replace MONAD_TESTNET_RPC with your node RPC (local or reverse-proxied).
  • eth_syncing can be false on fully synced nodes.
  • net_peerCount returns hex — compare trends rather than single values.
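
The peer-count conversion from the last bullet, as a one-liner; peers_dec is our own helper name, and the canned response just demonstrates the parsing (point the curl at your own RPC).

```shell
# net_peerCount returns a hex string; convert before thresholding or graphing
peers_dec() { jq -r '.result' | xargs printf '%d\n'; }

# Live (uncomment):
# curl -s -X POST -H "Content-Type: application/json" \
#   --data '{"jsonrpc":"2.0","id":1,"method":"net_peerCount","params":[]}' \
#   "$MONAD_TESTNET_RPC" | peers_dec

echo '{"jsonrpc":"2.0","id":1,"result":"0x19"}' | peers_dec   # 25
```
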
Monad Testnet • systemd & logs
MONAD_TESTNET_UNIT="monad-testnet"

# Status
sudo systemctl status ${MONAD_TESTNET_UNIT}

# Live logs
sudo journalctl -u ${MONAD_TESTNET_UNIT} -f

# Restart
sudo systemctl restart ${MONAD_TESTNET_UNIT}

Operator flow

Service-first ops loop for Monad testnet: check status → tail logs → restart when necessary. Keep testnet units separate from mainnet to avoid accidental restarts.

Tip: if your unit name differs (e.g. monad.service), just change MONAD_TESTNET_UNIT.

Monad Mainnet

Same core checks, but treat mainnet as production: restrict RPC exposure and keep safe restart policies.

Monad Mainnet • RPC sanity checks
MONAD_MAINNET_RPC="http://127.0.0.1:8545"

# Chain ID
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}' \
  "$MONAD_MAINNET_RPC" | jq

# Latest block
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
  "$MONAD_MAINNET_RPC" | jq

# Sync status
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_syncing","params":[]}' \
  "$MONAD_MAINNET_RPC" | jq

# Node client/version (optional)
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"web3_clientVersion","params":[]}' \
  "$MONAD_MAINNET_RPC" | jq

Mainnet notes

On mainnet, these checks double as a fast incident triage: confirm the node’s chain identity and whether block height is progressing. Prefer local RPC (127.0.0.1) behind a reverse proxy + auth.

  • Keep RPC private; expose only via authenticated gateways.
  • Track height deltas over time (stalls are more important than absolute values).
  • web3_clientVersion is useful to confirm you’re on expected release.
Monad Mainnet • systemd restart policy
MONAD_MAINNET_UNIT="monad"

sudo mkdir -p /etc/systemd/system/${MONAD_MAINNET_UNIT}.service.d
sudo tee /etc/systemd/system/${MONAD_MAINNET_UNIT}.service.d/override.conf >/dev/null <<EOF
[Unit]
StartLimitIntervalSec=600
StartLimitBurst=5

[Service]
Restart=on-failure
RestartSec=5
EOF

sudo systemctl daemon-reload
sudo systemctl restart ${MONAD_MAINNET_UNIT}
sudo systemctl status ${MONAD_MAINNET_UNIT}

Why this matters

Hardened restart policy reduces downtime from transient failures while preventing infinite restart loops. Adjust the unit name to match your deployment (e.g. monad.service, monad-node).

Combine with external monitoring (Prometheus/Grafana/Alertmanager) so you get alerted when restarts occur frequently.