Image support

The chart's default image is ghcr.io/rsquad/ton-rust-node/node:v0.2.1-mainnet.

Commands on this page use the following placeholders:
  • <RELEASE_NAME>: the Helm release name, for example my-validator.
  • <VALUES_FILE>: the path to a Helm values file, for example values.yaml.

Node roles

The chart deploys the same TON Rust node binary in two operational roles: validator and full node.
  • Validator: participates in consensus and validator elections. Keep liteserver and jsonRpc disabled; expose only the required node and ops ports (adnl, and control if needed).
  • Full node: syncs the chain and serves external clients (APIs, explorers, bots). Enable liteserver, jsonRpc, or both when external access is required.
  • Run validators and full nodes as separate Helm releases so resources, security policy, and lifecycle stay isolated.
  • If full chain history is needed, enable archival mode as described in Archival node settings.
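As a sketch, a validator-oriented values file simply leaves the external ports out. This assumes, as the quick start below implies, that liteserver and jsonRpc stay disabled unless they are set under ports; the IP shown is a hypothetical example.

```yaml
# Hypothetical validator values: no liteserver/jsonRpc entries under ports,
# so only the node's own ports (adnl, and control if needed) are exposed.
replicas: 1

services:
  adnl:
    perReplica:
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "203.0.113.10"  # example IP

nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
```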

Quick start

Prerequisites

  • Kubernetes cluster access configured for helm.
  • Helm 3 installed.
  • The local chart at ./helm/ton-rust-node, obtained by cloning the ton-rust-node repository.
  • A values file for the release, for example values.yaml.

Install and deploy the TON Rust node with Helm using a minimal configuration, then optionally enable the liteserver and JSON Remote Procedure Call (JSON-RPC) ports. To deploy a validator, follow this page for the base node deployment and keep the liteserver and JSON-RPC ports disabled. For the validator election and operations workflow, see the validator guide (nodectl).

1. Prepare a values file

values.yaml

replicas: 2

services:
  adnl:
    perReplica:
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "1.2.3.4"
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "5.6.7.8"

nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
  node-1.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
The chart includes a mainnet globalConfig and a default logsConfig. This minimal setup requires only nodeConfigs. Per-replica service annotations are optional and shown here for static IP assignment. The example uses metallb.universe.tf/loadBalancerIPs annotations. Other networking modes are described in the Networking section, including NodePort, hostPort, hostNetwork, and ingress controllers such as ingress-nginx (Nginx).

2. Install the release

All helm commands below require Helm to be installed and available in PATH. See Install Helm. Use the local chart from ton-rust-node/helm/ton-rust-node:
helm install <RELEASE_NAME> ./helm/ton-rust-node -f <VALUES_FILE>
Alternatively, install from an Open Container Initiative (OCI) registry (Helm 3.8 or later):
helm install <RELEASE_NAME> oci://ghcr.io/rsquad/ton-rust-node/helm/node -f <VALUES_FILE>
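Before overriding values, it can help to inspect the chart's built-in defaults (globalConfig, logsConfig, ports). A minimal sketch, assuming Helm 3.8 or later, which understands oci:// chart references; the commented command needs network access to the registry:

```shell
#!/bin/sh
# Sketch: dump the chart's default values so you can see what it ships
# before overriding anything in your own values file.
CHART_REF="oci://ghcr.io/rsquad/ton-rust-node/helm/node"
echo "inspecting defaults for ${CHART_REF}"
# Against the real registry:
#   helm show values "$CHART_REF" > default-values.yaml
```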

Verify deployment

Check pod status for the release:
kubectl get pods -l app.kubernetes.io/name=node,app.kubernetes.io/instance=<RELEASE_NAME>
Check service status for the release:
kubectl get svc -l app.kubernetes.io/name=node,app.kubernetes.io/instance=<RELEASE_NAME>
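The two checks above can be wrapped in a small script that also waits for readiness; `kubectl wait` blocks until the selected pods report Ready. A sketch, where the release name my-validator is a placeholder and the commented commands assume kubectl is pointed at the target cluster:

```shell
#!/bin/sh
# Build the label selector used throughout this page from a release name.
RELEASE="my-validator"
SELECTOR="app.kubernetes.io/name=node,app.kubernetes.io/instance=${RELEASE}"
echo "selector: ${SELECTOR}"
# Against a live cluster:
#   kubectl get pods -l "$SELECTOR"
#   kubectl wait --for=condition=Ready pod -l "$SELECTOR" --timeout=300s
```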

Enable liteserver and JSON-RPC ports

Use this configuration only for full node deployments; do not expose these ports on validators.

replicas: 2

ports:
  liteserver: 40000
  jsonRpc: 8081

services:
  adnl:
    perReplica:
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "10.0.0.1"
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "10.0.0.2"

nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
  node-1.json: |
    { "log_config_name": "/main/logs.config.yml", ... }

Run multiple releases in the same namespace

Use different release names:
helm install validator ./helm/ton-rust-node -f validator-values.yaml
helm install fullnode ./helm/ton-rust-node -f fullnode-values.yaml
This creates separate StatefulSets (validator, fullnode), services (validator-0, fullnode-0), and configs.

Useful commands

# Check pod status (replace "my-node" with the release name)
kubectl get pods -l app.kubernetes.io/name=node,app.kubernetes.io/instance=my-node

# Get external service IPs
kubectl get svc -l app.kubernetes.io/name=node,app.kubernetes.io/instance=my-node

# View logs
kubectl logs my-node-0 -c ton-node

# Exec into pod
kubectl exec -it my-node-0 -c ton-node -- /bin/sh