## Node config overview

`config.json` – the main configuration file that defines ADNL settings, database paths, garbage collection, collator behavior, and optional control, liteserver, or JSON-RPC endpoints.

In the Helm chart, per-node configuration files are provided through the `nodeConfigs` map in `values.yaml` or through an existing Secret. Keys must follow the `node-N.json` naming pattern, where `N` matches the 0-based StatefulSet replica index.

During pod initialization, the init container copies `node-<pod-index>.json` to `/main/config.json`.
## Helm integration constraints

Several fields in the node config must be consistent with Helm values:

| Field | Requirement |
|---|---|
| `adnl_node.ip_address` | Must match the external IP assigned to this replica's LoadBalancer service. The port must match `ports.adnl`. |
| `control_server.address` | The port must match `ports.control` if the control server is enabled. |
| `lite_server.address` | The port must match `ports.liteserver` if the liteserver is enabled. |
| `json_rpc_server.address` | The port must match `ports.jsonRpc` if JSON-RPC is enabled. |
| `metrics.address` | The port must match `ports.metrics` if metrics are enabled. |
| `log_config_name` | Must be `/main/logs.config.yml`, where the chart mounts the logs config. |
| `ton_global_config_name` | Must be `/main/global.config.json`, where the chart mounts the global config. |
| `internal_db_path` | Must be `/db`, where the chart mounts the database PVC. |
## How to provide configs

Choose one of the following options:

- Use `--set-file`.
- Define inline in `values.yaml`.
- Reference an existing Secret.
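For example, the inline `values.yaml` option might look like the sketch below. The IP and port are placeholders; only the `node-N.json` key naming is fixed by the chart.

```yaml
nodeConfigs:
  node-0.json: |
    {
      "adnl_node": { "ip_address": "203.0.113.10:30303" }
    }
```

With `--set-file`, the dot in the key must be escaped, e.g. `helm upgrade --install <release> <chart> --set-file 'nodeConfigs.node-0\.json=node-0.json'`.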
## Full node with liteserver config example

A typical full node exposes `lite_server` for lite-client queries and `json_rpc_server` for the HTTP API. The `control_server` is optional.

The example shows recommended values for production deployments. These values may differ from the defaults listed in the field reference. If a field is not specified, the node uses the default value defined in the codebase.
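A minimal sketch of such a `config.json` is shown below. The values are placeholders, not recommended production settings; only the paths and port relationships follow the constraints listed above, and the exact field layout should be checked against the field reference.

```json
{
  "log_config_name": "/main/logs.config.yml",
  "ton_global_config_name": "/main/global.config.json",
  "internal_db_path": "/db",
  "adnl_node": {
    "ip_address": "203.0.113.10:30303"
  },
  "lite_server": {
    "address": "0.0.0.0:3031"
  },
  "json_rpc_server": {
    "address": "0.0.0.0:3032"
  }
}
```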
## Validator node config example

A validator requires `control_server` for key management and election participation. Liteserver and JSON-RPC endpoints are not required on a validator and should be deployed separately for security reasons. The `collator_config` defines block production parameters.

The example shows recommended values for production deployments. These values may differ from the defaults listed in the field reference. If a field is not specified, the node uses the default value defined in the codebase.
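For orientation, a validator config skeleton might look like this. Values are placeholders, the `collator_config` contents are omitted, and the field layout is an assumption to be verified against the field reference.

```json
{
  "log_config_name": "/main/logs.config.yml",
  "ton_global_config_name": "/main/global.config.json",
  "internal_db_path": "/db",
  "adnl_node": {
    "ip_address": "203.0.113.11:30303"
  },
  "control_server": {
    "address": "0.0.0.0:50000"
  }
}
```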
## How to generate keys

Each node requires multiple Ed25519 key pairs. The config references them as base64-encoded 32-byte private keys. A separate key pair is required for each purpose: DHT, overlay, liteserver, control server, and control client. All keys in the config use the same structure: a `type_id` value of `1209251014` corresponds to Ed25519, which is the only supported key type. Public keys, such as in `control_server.clients`, use the same structure but specify `pub_key` instead of `pvt_key`.
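Based on the description above, each private-key entry presumably looks like this (the base64 value is a placeholder):

```json
{
  "type_id": 1209251014,
  "pvt_key": "<base64-encoded 32-byte private key>"
}
```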
To generate:

1. **Generate a private key.** Generate a raw 32-byte Ed25519 private key using OpenSSL. The command outputs a base64 string such as `GnEN3s5t2Z3W1e...==`; use this value as `pvt_key`.
2. **Derive the public key** from the private key. Use the public key for `control_server.clients` and for publishing the liteserver key in the global config.
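The two steps above can be sketched with OpenSSL (1.1.1 or newer). This relies on the standard RFC 8410 encodings, where the raw 32-byte key is the trailing 32 bytes of the DER output; verify the result against your own tooling.

```shell
# Generate an Ed25519 key pair; keep the PEM so the public key can be derived
openssl genpkey -algorithm ed25519 -out key.pem

# Raw 32-byte private key, base64 (last 32 bytes of the PKCS#8 DER) -> pvt_key
openssl pkey -in key.pem -outform DER | tail -c 32 | base64

# Raw 32-byte public key, base64 (last 32 bytes of the SPKI DER) -> pub_key
openssl pkey -in key.pem -pubout -outform DER | tail -c 32 | base64
```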
3. **Repeat the process** for each required key pair.

A typical full node with liteserver requires 3 key pairs:
| Key | Used in | Field |
|---|---|---|
| DHT private key | `adnl_node.keys[0]` (tag 1) | `pvt_key` |
| Overlay private key | `adnl_node.keys[1]` (tag 2) | `pvt_key` |
| Liteserver private key | `lite_server.server_key` | `pvt_key` |

A validator additionally requires:

| Key | Used in | Field |
|---|---|---|
| Control server private key | `control_server.server_key` | `pvt_key` |
| Control client key pair | `control_server.clients.list[0]` | `pub_key` (public part only) |
## Quick generation script

Generate all keys at once for a full node with liteserver. For a validator, add `control-server` and `control-client` to the loop.
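A sketch of such a script, using the same OpenSSL derivation as above (file names and output format are illustrative):

```shell
#!/bin/sh
set -e
# One key pair per purpose; add control-server and control-client for a validator
for name in dht overlay liteserver; do
  openssl genpkey -algorithm ed25519 -out "${name}.pem"
  echo "${name} pvt_key: $(openssl pkey -in "${name}.pem" -outform DER | tail -c 32 | base64)"
  echo "${name} pub_key: $(openssl pkey -in "${name}.pem" -pubout -outform DER | tail -c 32 | base64)"
done
```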
## Archival node

By default, the node prunes old block archives and state snapshots through the `gc` section. To preserve the full blockchain history, override the `gc` settings in the node configuration:

- `enable_for_archives: false` stops pruning of block archives. When pruning is off, `archives_life_time_hours` is ignored.
- `enable_for_shard_state_persistent: false` stops `gc` from pruning persistent state snapshots. With `gc` enabled, older states are retained at decreasing frequency. Turning it off preserves all snapshots and increases disk usage.
- `skip_saving_persistent_states: false` ensures persistent snapshots are created. If set to `true`, snapshots are never saved regardless of `gc` settings.
- Do not turn `cells_gc_config` off. Cells `gc` removes only unreferenced cells and does not delete blocks or states. Disabling it leads to database storage leaks.
An archival node retains the full history, so make sure `storage.db.size` is large enough.
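Putting the overrides together, the `gc` section of an archival node config might look like this. Only the three flags come from the text above; any other `gc` fields keep their defaults.

```json
"gc": {
  "enable_for_archives": false,
  "enable_for_shard_state_persistent": false,
  "skip_saving_persistent_states": false
}
```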
## Networking

A TON node needs a stable, publicly reachable IP address. Other nodes connect to the `adnl_node.ip_address` specified in the node config.

### Ports and services

The chart exposes five ports. Each port except `ports.adnl` is optional: set a port to `null` to omit it. `ports.adnl` is always enabled.
| Port | Protocol | Default | Purpose |
|---|---|---|---|
| `ports.adnl` | UDP | 30303 | Peer-to-peer protocol. Must be publicly reachable. |
| `ports.control` | TCP | 50000 | Node management: stop, restart, elections. Should remain internal. |
| `ports.liteserver` | TCP | null | Liteserver API for external consumers. |
| `ports.jsonRpc` | TCP | null | JSON-RPC API for external consumers. |
| `ports.metrics` | TCP | null | Prometheus metrics, health and readiness probes. |
### Per-port services

Each enabled port gets its own Kubernetes Service per replica. This allows independent configuration of the service type, annotations, and traffic policy for each port.

| Port | Service name | Default type | Rationale |
|---|---|---|---|
| ADNL | `<release>-<i>` | LoadBalancer | Must be publicly reachable for peer-to-peer communication. |
| control | `<release>-<i>-control` | ClusterIP | Used for node management. Keep internal. |
| liteserver | `<release>-<i>-liteserver` | LoadBalancer | Serves external API consumers. |
| jsonRpc | `<release>-<i>-jsonrpc` | LoadBalancer | Serves external API consumers. |
| metrics | `<release>-metrics` | ClusterIP | Used for internal scraping only. Not per-replica; implemented through a separate template. |

Each service supports `type`, `externalTrafficPolicy`, `annotations`, and `perReplica` overrides.
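A hedged `values.yaml` sketch of these overrides; the exact key layout under `services` is an assumption based on the list above:

```yaml
services:
  liteserver:
    type: LoadBalancer
    externalTrafficPolicy: Local  # preserve client source IPs
    annotations: {}
  control:
    type: ClusterIP               # keep node management internal
```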
### Exposure modes

The chart supports five exposure modes. Exposure modes control how traffic reaches the pod, not which ports are enabled. Choose one mode per deployment. Modes can be combined, but this is uncommon.

#### LoadBalancer (recommended)

Each per-replica Service provisions a cloud load balancer or a MetalLB VIP. The external IP is assigned through provider-specific annotations on the ADNL service.

**Static IP assignment**
Use `perReplica` annotations to pin IPs. The list index matches the replica index.

- MetalLB
- AWS Elastic IP
- GCP
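As an illustration for MetalLB: the annotation key is MetalLB's documented `metallb.universe.tf/loadBalancerIPs`, while the `services.adnl.perReplica` layout is assumed from this chart's text. AWS and GCP use their own provider-specific annotations in the same positions.

```yaml
services:
  adnl:
    perReplica:
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "203.0.113.10"  # replica 0
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "203.0.113.11"  # replica 1
```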
`adnl_node.ip_address` in the node config must match the external IP assigned to that replica's ADNL service.
#### NodePort

Uses the Kubernetes NodePort mechanism. Traffic arrives at `<node-ip>:<nodePort>` and is forwarded to the pod.

- Works on any cluster. No load balancer infrastructure required.
- Port conflicts must be managed manually. Multiple replicas require different ports; NodePorts default to the 30000–32767 range.
- `adnl_node.ip_address` must be the node's external IP with the NodePort, not the container port.
- The pod must run on the node whose IP is configured. Enforce this using `nodeSelector` or `nodeAffinity`.
#### hostPort

Binds selected container ports directly to the host network interface. The pod remains in the pod network, and each port can be enabled independently.

- Only the enabled ports are exposed on the host IP. Other ports remain in the pod network behind Services.
- Network policies still work, unlike with `hostNetwork`.
- One pod per node. The port binds to `0.0.0.0` on the host, so two pods on the same node would conflict. Use `podAntiAffinity` or `nodeSelector` to spread replicas.
- `adnl_node.ip_address` must match the host's external IP.
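The per-port toggles might look like this in `values.yaml`. The `hostPort.adnl` key appears later in this document; the other keys are assumed to follow the same pattern.

```yaml
hostPort:
  adnl: true        # expose ADNL on the host IP
  liteserver: false # keep liteserver behind its Service
```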
#### hostNetwork

The pod uses the host network stack directly. All container ports bind to the host IP; no NAT or Service abstraction is required. The pod itself becomes the network endpoint.

- Zero NAT overhead.
- All ports are exposed on the host, including control. Restrict access using firewall rules.
- Network policies do not work. The pod runs in the host network namespace.
- One pod per node. Same scheduling constraint as `hostPort`.
- `adnl_node.ip_address` must match the host's external IP.
- Services are still created. Set `services.adnl.type: ClusterIP` if a LoadBalancer is not required.
#### Ingress-nginx stream proxy

Reuses an existing ingress-nginx controller to forward raw TCP and UDP streams to the node's ClusterIP Services. No chart changes are required; configuration is external. Override the service type to ClusterIP for the ports routed through the ingress, and keep ADNL on a direct path, for example with `hostPort.adnl: true`.

Use this mode when ingress-nginx is already deployed and additional LoadBalancers for liteserver or JSON-RPC are not desired. ADNL still requires a direct path: LoadBalancer or `hostPort`.
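The ClusterIP override might look like this, reusing the `services` and `hostPort` key layouts assumed elsewhere in this document:

```yaml
services:
  liteserver:
    type: ClusterIP
  jsonRpc:
    type: ClusterIP
hostPort:
  adnl: true  # ADNL keeps a direct path to the host IP
```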
Trade-offs:
- Reuses existing infrastructure. No additional load balancer cost.
- Adds an extra proxy hop; ingress-nginx sits between the client and the node.
- `adnl_node.ip_address` must be the ingress controller's external IP.
- Configuration is external. The ingress-nginx `tcp-services` and `udp-services` ConfigMaps must be managed separately.
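An illustrative `tcp-services` entry, following the documented ingress-nginx format of mapping an external port to `<namespace>/<service>:<port>`; the namespace, service name, and port here are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service>:<service port>"
  "3031": "ton/my-release-0-liteserver:3031"
```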
#### Comparison
| Mode | NAT overhead | LB required | Port management | Network policies | Complexity |
|---|---|---|---|---|---|
| LoadBalancer | DNAT | yes (cloud LB / MetalLB) | automatic | yes | low |
| NodePort | kube-proxy | no | manual (port ranges) | yes | medium |
| hostPort | minimal | no | manual (one pod per node) | yes | medium |
| hostNetwork | none | no | manual (one pod per node) | no | medium |
| Ingress-nginx stream | proxy hop | no (reuses ingress) | manual (ConfigMaps) | yes | medium |
- Use `hostPort` on bare metal with direct public IPs when MetalLB is not available.
- Use `hostNetwork` only when zero NAT overhead is critical and the security trade-offs of exposing all ports are acceptable.
### NetworkPolicy

The chart can create a NetworkPolicy with per-port ingress rules. When `networkPolicy.enabled` is true:

- ADNL (UDP) ingress is always created. If `networkPolicy.adnl.allowFrom` is empty, the default source is `0.0.0.0/0`.
- TCP rules are configured independently per port: `control`, `liteserver`, `jsonRpc`, `metrics`.
- A TCP rule is created only when both conditions are true:
  - the corresponding port is enabled in `ports.*`;
  - `networkPolicy.<port>.enabled` is `true`.
- `networkPolicy.<port>.allowFrom` accepts raw Kubernetes `from` entries. If omitted or empty, the source is not restricted for that rule.
- `networkPolicy.extraIngress` appends additional raw ingress rules.
Note that `0.0.0.0/0` must be allowed separately for ADNL so that arbitrary peers can reach the node.
NetworkPolicy has no effect when `hostNetwork: true`, because the pod runs in the host network namespace.
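A sketch of the corresponding `values.yaml`, with key names taken from the list above; the `ipBlock` entry is a standard Kubernetes NetworkPolicy `from` entry, and the CIDR is a placeholder:

```yaml
networkPolicy:
  enabled: true
  adnl:
    allowFrom: []            # empty -> defaults to 0.0.0.0/0
  liteserver:
    enabled: true
    allowFrom:
      - ipBlock:
          cidr: 10.0.0.0/8   # restrict liteserver to internal clients
```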