The node uses the log4rs framework for logging.
Logging is configured using a YAML file specified in the log_config_name field of the node configuration. In the Helm chart, this file is mounted at /main/logs.config.yml.
A default configuration is bundled with the chart at files/logs.config.yml and is used if no custom configuration is provided. It can be overridden in one of the following ways:

- inline in values.yaml;
- from a local file via --set-file logsConfig=path;
- by referencing an existing ConfigMap.
Hot reload
The refresh_rate field instructs log4rs to periodically re-read the configuration file. This allows log levels to be changed without restarting the node – updates are applied within the specified interval. Supported units: seconds, minutes, hours. If the field is omitted, the config is read only once at startup.
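For example, to have log4rs re-check the file every 30 seconds (the interval value here is illustrative):

```yaml
refresh_rate: 30 seconds
```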
This feature can be used during production debugging: temporarily increase a logger’s level to debug, observe the output, then restore the original level without restarting the node.
Appenders
Appenders define where logs are written. Each appender has a unique name (the YAML key) and a kind. Three kinds are supported: rolling_file, console, and file.
A TON node can generate a large volume of logs, especially during synchronization, elections, and catch-up. Appender configuration and log levels should be selected accordingly.
rolling_file
The rolling_file appender is the default and recommended option for production. It writes logs to a file with automatic size-based rotation.
The chart creates a dedicated logs PersistentVolumeClaim for this appender, ensuring logs remain available locally. Rotation prevents uncontrolled disk usage.
The policy section defines when and how rotation occurs.

- Trigger: size. Rotates the log file when it reaches the configured size.

| Field | Description |
|---|---|
| limit | Maximum file size. Supported suffixes: b, kb, mb, gb, tb; e.g. 25 gb. |

- Roller: fixed_window. Renames archived files using a pattern with a sliding index.

| Field | Required | Description |
|---|---|---|
| pattern | yes | Archive filename template. {} is replaced by the index. Append .gz to compress archives. |
| base | no; default 0 | Starting index |
| count | yes | Maximum number of archive files |

Example configuration:

```yaml
pattern: "/logs/output_{}.log"
base: 1
count: 4
```

With this configuration, when rotation occurs:

- output.log is renamed to output_1.log;
- output_1.log → output_2.log;
- output_2.log → output_3.log;
- output_3.log → output_4.log;
- the previous output_4.log is deleted.

Append .gz to the pattern to enable compression of archived logs.
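Putting these settings together, a complete rolling_file appender might look like the following sketch (the appender name logfile and the file paths are illustrative):

```yaml
appenders:
  logfile:
    kind: rolling_file
    path: "/logs/output.log"
    policy:
      trigger:
        kind: size
        limit: 25 gb
      roller:
        kind: fixed_window
        pattern: "/logs/output_{}.log.gz"
        base: 1
        count: 4
```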
storage.logs.size defines the size of the PVC mounted at /logs. Rotation settings must fit within this limit. Example of default configuration:

```yaml
limit: 25 gb
count: 4
```
The default storage.logs.size is 150Gi (~161 GB), providing headroom over the 125 GB worst case these rotation settings can consume (one active file plus four archives of 25 GB each). If rotation limits are reduced, for example to 1 GB × 10 archives with .gz compression, actual disk usage is lower, allowing a smaller volume size.
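The reduced-rotation variant mentioned above could be expressed as the following policy sketch (paths are illustrative):

```yaml
policy:
  trigger:
    kind: size
    limit: 1 gb
  roller:
    kind: fixed_window
    pattern: "/logs/output_{}.log.gz"
    base: 1
    count: 10
```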
console
The console appender writes logs to stdout or stderr. It is suitable when the cluster uses a log collection stack such as Loki, Fluentd, or Elasticsearch, and log storage is handled externally.
At debug or trace levels, log volume can be high and may overload the collector. Log levels should be configured accordingly. When using console-only logging, disable the logs volume by setting storage.logs.enabled to false.
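A minimal console appender, assuming logs should go to stdout for an external collector:

```yaml
appenders:
  stdout:
    kind: console
    target: stdout
```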
file
The file appender writes logs to a file without rotation. The file grows indefinitely and may exhaust disk space. Use rolling_file instead.
filters
A filters list may be attached to any appender for additional message filtering. For example, the threshold filter discards messages below the specified level.
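A sketch of a threshold filter that drops everything below warn (the appender name stdout is illustrative):

```yaml
appenders:
  stdout:
    kind: console
    filters:
      - kind: threshold
        level: warn
```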
Encoder (log format)
Each appender uses an encoder to format log entries. The default encoder kind is pattern.
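An example pattern encoder built from the specifiers described below (the pattern itself is illustrative):

```yaml
encoder:
  kind: pattern
  pattern: "{d(%Y-%m-%d %H:%M:%S.%f)} {l} [{t}] {m}{n}"
```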
Format specifiers
| Specifier | Name | Description |
|---|---|---|
| {d} / {d(fmt)} | date | Timestamp. Default format is ISO 8601. Custom format uses chrono syntax: {d(%Y-%m-%d %H:%M:%S.%f)}. |
| {l} | level | Log levels: error, warn, info, debug, trace |
| {m} | message | Log message body |
| {n} | newline | Platform-dependent newline |
| {t} | target | Logger target; module name or explicit target: in the log macro. |
| {I} | thread_id | Numeric thread ID |
| {T} | thread | Thread name |
| {f} | file | Source file name |
| {L} | line | Source line number |
| {M} | module | Module path |
| {P} | pid | Process ID |
| {h(..)} | highlight | Colorizes enclosed text by log levels; applies to console output only. |
Example output
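With an illustrative pattern such as {d} {l} [{t}] {m}{n}, a log line might look like the following (timestamp, target, and message are made up for illustration):

```text
2024-01-15T12:30:45.123456+00:00 INFO [node::network] peer connected
```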
Loggers
Root logger
The root logger is the default logger. All log records not matched by a named logger are processed by it.

| Field | Required | Description |
|---|---|---|
| level | yes | Log level: off, error, warn, info, debug, trace. |
| appenders | yes | List of appender names defined in the appenders section. |
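A root logger sketch (the appender name logfile is illustrative):

```yaml
root:
  level: info
  appenders:
    - logfile
```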
Named loggers
Named loggers configure log levels for specific components. The logger name must match the target used in the node code.
| Field | Required | Default | Description |
|---|---|---|---|
| level | no | inherited from parent | Log level |
| appenders | no | [] | Appenders assigned to this logger. |
| additive | no | true | If true, messages also propagate to the parent logger appenders (root). |
Logger names form a hierarchy separated by ::. For example, node::network is a child of node. With additive: true, messages logged by node::network are written to:

- the appenders configured for node::network;
- the appenders of node;
- the appenders of the root logger.
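A sketch of a named logger that raises node::network to debug and keeps its output out of the parent appenders (the appender name logfile is illustrative):

```yaml
loggers:
  node::network:
    level: debug
    appenders:
      - logfile
    additive: false
```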
Log levels
Ordered from most to least verbose:

| Level | Description |
|---|---|
| trace | Most detailed level. Used for execution flow tracing. |
| debug | Debug information. |
| info | Informational messages about normal operation. |
| warn | Indication of a potential problem. |
| error | Errors that don’t stop the node. |
| off | Logging disabled. |
Available logger targets
The following targets can be configured in the loggers section:
| Target | Description |
|---|---|
| node | Core node messages |
| boot | Node bootstrap and initialization |
| sync | Block synchronization |
| node::network | Node networking |
| node::network::neighbours | Neighbor tracking (high log volume) |
| node::network::liteserver | Liteserver request handling |
| node::validator::collator | Block collation |
| adnl | ADNL network protocol |
| adnl_query | ADNL query processing |
| overlay | Overlay networks |
| overlay_broadcast | Overlay broadcast messages |
| rldp | RLDP protocol (reliable large datagrams) |
| dht | Distributed hash table |
| block | Block structure and config parsing |
| executor | Transaction execution |
| tvm | TON Virtual Machine |
| validator | Validation (general) |
| validator_manager | Validator management |
| validate_query | Block and query validation |
| validate_reject | Rejected block and query validation |
| catchain | Catchain consensus protocol |
| catchain_adnl_overlay | ADNL overlay for catchain |
| catchain_network | Catchain network transport |
| validator_session | Validator sessions |
| consensus_common | Shared consensus logic |
| storage | Data storage |
| index | Data indexing |
| ext_messages | External message handling |
| telemetry | Telemetry and metrics |
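Tying the pieces together, a minimal logs.config.yml might look like this sketch (the appender name, refresh interval, levels, and pattern are illustrative):

```yaml
refresh_rate: 30 seconds

appenders:
  logfile:
    kind: rolling_file
    path: "/logs/output.log"
    policy:
      trigger:
        kind: size
        limit: 25 gb
      roller:
        kind: fixed_window
        pattern: "/logs/output_{}.log.gz"
        base: 1
        count: 4
    encoder:
      kind: pattern
      pattern: "{d} {l} [{t}] {m}{n}"

root:
  level: info
  appenders:
    - logfile

loggers:
  node::network:
    level: warn
```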