This document provides a comprehensive reference for all configuration options available in Evolve. Understanding these configurations will help you tailor Evolve's behavior to your specific needs, whether you're running an aggregator, a full node, or a light client.
- DA-Only Sync Mode
- Introduction to Configurations
- Base Configuration
- Node Configuration (node)
- Pruning Configuration (pruning)
- Data Availability Configuration (da)
- P2P Configuration (p2p)
- RPC Configuration (rpc)
- Instrumentation Configuration (instrumentation)
- Logging Configuration (log)
- Signer Configuration (signer)
Evolve supports running nodes that sync exclusively from the Data Availability (DA) layer without participating in P2P networking. This mode is useful for:
- Pure DA followers: Nodes that only need the canonical chain data from DA
- Resource optimization: Reducing network overhead by avoiding P2P gossip
- Simplified deployment: No need to configure or maintain P2P peer connections
- Isolated environments: Nodes that should not participate in P2P communication
To enable DA-only sync mode:
- Leave P2P peers empty (default behavior):

  p2p:
    peers: "" # Empty or omit this field entirely

- Configure DA connection (required):

  da:
    address: "your-da-service:port"
    namespace: "your-namespace"
    # ... other DA configuration

- Optional: You can still configure a P2P listen address for potential future connections, but without peers, no P2P networking will occur.
When running in DA-only mode, the node will:
- ✅ Sync blocks and headers from the DA layer
- ✅ Validate transactions and maintain state
- ✅ Serve RPC requests
- ❌ Not participate in P2P gossip or peer discovery
- ❌ Not share blocks with other nodes via P2P
- ❌ Not receive transactions via P2P (only from direct RPC submission)
Evolve configurations can be managed through a YAML file (typically evnode.yml located in ~/.evolve/config/ or <your_home_dir>/config/) and command-line flags. The system prioritizes configurations in the following order (highest priority first):
- Command-line flags: Override all other settings.
- YAML configuration file: Values specified in the configuration file (e.g., evnode.yml).
- Default values: Predefined defaults within Evolve.
Environment variables can also be used, typically prefixed with your executable's name (e.g., YOURAPP_CHAIN_ID="my-chain").
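The precedence rules above can be sketched as a simple resolution function. This is illustrative only; Evolve's actual flag and file handling lives in its CLI layer:

```python
def resolve_setting(flag_value, yaml_value, default):
    """Effective value of one option: a command-line flag beats the
    YAML file, and the YAML file beats the built-in default."""
    if flag_value is not None:
        return flag_value
    if yaml_value is not None:
        return yaml_value
    return default

# A command-line flag overrides the YAML file:
assert resolve_setting("2s", "1s", "1s") == "2s"
# Without a flag, the YAML value wins over the default:
assert resolve_setting(None, "500ms", "1s") == "500ms"
# With neither, the default applies:
assert resolve_setting(None, None, "1s") == "1s"
```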
These are fundamental settings for your Evolve node.
Description: The root directory where Evolve stores its data, including the database and configuration files. This is a foundational setting that dictates where all other file paths are resolved from.
YAML: This option is not set within the YAML configuration file itself, as it specifies the location of the configuration file and other application data.
Command-line Flag:
--home <path>
Example: --home /mnt/data/evolve_node
Default: ~/.evolve (or a directory derived from the application name if defaultHome is customized).
Constant: FlagRootDir
Description: The path, relative to the Root Directory, where the Evolve database will be stored. This database contains blockchain state, blocks, and other critical node data.
YAML: Set this in your configuration file at the top level:
db_path: "data"

Command-line Flag:
--rollkit.db_path <path>
Example: --rollkit.db_path "node_db"
Default: "data"
Constant: FlagDBPath
Description: The unique identifier for your chain. This ID is used to differentiate your network from others and is crucial for network communication and transaction validation.
YAML: Set this in your configuration file at the top level:
chain_id: "my-evolve-chain"

Command-line Flag:
--chain_id <string>
Example: --chain_id "super_rollup_testnet_v1"
Default: "evolve"
Constant: FlagChainID
Settings related to the core behavior of the Evolve node, including its mode of operation and block production parameters.
YAML Section:
node:
  # ... node configurations ...

Description: If true, the node runs in aggregator mode. Aggregators are responsible for producing blocks by collecting transactions, ordering them, and proposing them to the network.
YAML:
node:
  aggregator: true

Command-line Flag:
--rollkit.node.aggregator (boolean, presence enables it)
Example: --rollkit.node.aggregator
Default: false
Constant: FlagAggregator
Description: If true, the node runs in light client mode. Light clients rely on full nodes for block headers and state information, offering a lightweight way to interact with the chain without storing all data.
YAML:
node:
  light: true

Command-line Flag:
--rollkit.node.light (boolean, presence enables it)
Example: --rollkit.node.light
Default: false
Constant: FlagLight
Description: The target time interval between consecutive blocks produced by an aggregator. This duration (e.g., "500ms", "1s", "5s") dictates the pace of block production.
YAML:
node:
  block_time: "1s"

Command-line Flag:
--rollkit.node.block_time <duration>
Example: --rollkit.node.block_time 2s
Default: "1s"
Constant: FlagBlockTime
Description: The maximum number of blocks that can be pending Data Availability (DA) submission. When this limit is reached, the aggregator pauses block production until some blocks are confirmed on the DA layer. Use 0 for no limit. This helps manage resource usage and DA layer capacity.
YAML:
node:
  max_pending_blocks: 100

Command-line Flag:
--rollkit.node.max_pending_blocks <uint64>
Example: --rollkit.node.max_pending_blocks 50
Default: 0 (no limit)
Constant: FlagMaxPendingBlocks
Description:
Enables lazy aggregation mode. In this mode, blocks are produced only when new transactions are available in the mempool or after the lazy_block_interval has passed. This optimizes resource usage by avoiding the creation of empty blocks during periods of inactivity.
YAML:
node:
  lazy_mode: true

Command-line Flag:
--rollkit.node.lazy_mode (boolean, presence enables it)
Example: --rollkit.node.lazy_mode
Default: false
Constant: FlagLazyAggregator
Description:
The maximum time interval between blocks when running in lazy aggregation mode (lazy_mode). This ensures that blocks are produced periodically even if there are no new transactions, keeping the chain active. This value is generally larger than block_time.
YAML:
node:
  lazy_block_interval: "30s"

Command-line Flag:
--rollkit.node.lazy_block_interval <duration>
Example: --rollkit.node.lazy_block_interval 1m
Default: "30s"
Constant: FlagLazyBlockTime
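Putting lazy_mode and lazy_block_interval together, the block production decision can be sketched as follows. This is an illustrative model of the documented behavior, not Evolve's actual scheduler:

```python
def should_produce_block(pending_txs: int, seconds_since_last_block: float,
                         lazy_block_interval: float) -> bool:
    """Lazy aggregation: build a block when transactions are waiting,
    or when the lazy interval forces a periodic 'heartbeat' block."""
    return pending_txs > 0 or seconds_since_last_block >= lazy_block_interval

assert should_produce_block(3, 0.5, 30.0)        # txs pending: produce now
assert not should_produce_block(0, 10.0, 30.0)   # idle, interval not reached
assert should_produce_block(0, 30.0, 30.0)       # heartbeat keeps chain active
```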
Description: Controls automatic pruning of stored block data and metadata from the local store. Pruning helps manage disk space by periodically removing old blocks and their associated state, while keeping a recent window of history for validation and queries.
Pruning Modes:
- disabled (default): Archive mode - keeps all blocks and metadata indefinitely
- metadata: Prunes only state metadata (execution state snapshots), keeps all blocks
- all: Prunes both blocks (headers, data, signatures) and metadata
How Pruning Works:
When pruning is enabled, the pruner runs at the configured interval and removes data beyond the retention window (pruning_keep_recent). The system uses intelligent batching to avoid overwhelming the node:
- Batch sizes are automatically calculated based on your pruning_interval and block_time
- Catch-up mode: When first enabling pruning on an existing node, smaller batches (2× blocks per interval) are used to gradually catch up without impacting performance
- Normal mode: Once caught up, larger batches (4× blocks per interval) are used for efficient maintenance
- Progress tracking: Pruning progress is saved after each batch, so restarts don't lose progress
Batch Size Examples:
With default settings (15 minute interval, 1 second blocks):
- Catch-up: ~1,800 blocks per run
- Normal: ~3,600 blocks per run
With high-throughput chain (15 minute interval, 100ms blocks):
- Catch-up: ~18,000 blocks per run
- Normal: ~36,000 blocks per run
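The batch-size arithmetic above can be reproduced with a small helper, based on the 2x/4x multipliers described here (a sketch; the real calculation lives in the pruner):

```python
def pruning_batch_size(pruning_interval_ms: int, block_time_ms: int,
                       catching_up: bool) -> int:
    """Blocks pruned per run: 2x blocks-per-interval while catching up,
    4x once caught up, as described above."""
    blocks_per_interval = pruning_interval_ms // block_time_ms
    return (2 if catching_up else 4) * blocks_per_interval

fifteen_min = 15 * 60 * 1000
assert pruning_batch_size(fifteen_min, 1000, True) == 1_800   # default chain
assert pruning_batch_size(fifteen_min, 1000, False) == 3_600
assert pruning_batch_size(fifteen_min, 100, True) == 18_000   # 100ms blocks
assert pruning_batch_size(fifteen_min, 100, False) == 36_000
```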
YAML:
pruning:
  pruning_mode: "all"
  pruning_keep_recent: 100000
  pruning_interval: "15m"

Command-line Flags:
- --evnode.pruning.pruning_mode <string>
  - Description: Pruning mode: 'disabled' (keep all), 'metadata' (prune state only), or 'all' (prune blocks and state)
  - Example: --evnode.pruning.pruning_mode all
  - Default: "disabled"
- --evnode.pruning.pruning_keep_recent <uint64>
  - Description: Number of most recent blocks/metadata to retain when pruning is enabled. Must be > 0 when pruning is enabled.
  - Example: --evnode.pruning.pruning_keep_recent 100000
  - Default: 0
- --evnode.pruning.pruning_interval <duration>
  - Description: How often to run the pruning process. Must be >= block_time when pruning is enabled. Larger intervals allow larger batch sizes.
  - Example: --evnode.pruning.pruning_interval 15m
  - Default: 0 (disabled)
Constants: FlagPruningMode, FlagPruningKeepRecent, FlagPruningInterval
Important Notes:
- When DA is enabled (DA address is configured), pruning only removes blocks that have been confirmed on the DA layer (for mode all) to ensure data safety
- When DA is not enabled (no DA address configured), pruning proceeds based solely on store height, allowing nodes without DA to manage disk space
- The first pruning run after enabling may take several cycles to catch up, processing data in smaller batches
- Pruning cannot be undone - ensure your retention window is sufficient for your use case
- For production deployments, consider keeping at least 100,000 recent blocks
- The pruning interval should be balanced with your disk space growth rate
Parameters for connecting and interacting with the Data Availability (DA) layer, which Evolve uses to publish block data.
YAML Section:
da:
  # ... DA configurations ...

Description: The network address (host:port) of the Data Availability layer service. Evolve connects to this endpoint to submit and retrieve block data.
YAML:
da:
  address: "localhost:26659"

Command-line Flag:
--rollkit.da.address <string>
Example: --rollkit.da.address 192.168.1.100:26659
Default: "" (empty, must be configured if DA is used)
Constant: FlagDAAddress
Description: The authentication token required to interact with the DA layer service, if the service mandates authentication.
YAML:
da:
  auth_token: "YOUR_DA_AUTH_TOKEN"

Command-line Flag:
--rollkit.da.auth_token <string>
Example: --rollkit.da.auth_token mysecrettoken
Default: "" (empty)
Constant: FlagDAAuthToken
Description: The gas price to use for transactions submitted to the DA layer. A value of -1 indicates automatic gas price determination (if supported by the DA layer). Higher values may lead to faster inclusion of data.
YAML:
da:
  gas_price: 0.025

Command-line Flag:
--rollkit.da.gas_price <float64>
Example: --rollkit.da.gas_price 0.05
Default: -1 (automatic)
Constant: FlagDAGasPrice
Description: A multiplier applied to the gas price when retrying failed DA submissions. Values greater than 1 increase the gas price on retries, potentially improving the chances of successful inclusion.
YAML:
da:
  gas_multiplier: 1.1

Command-line Flag:
--rollkit.da.gas_multiplier <float64>
Example: --rollkit.da.gas_multiplier 1.5
Default: 1.0 (no multiplication)
Constant: FlagDAGasMultiplier
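One way to picture the effect of gas_multiplier is as a price escalation schedule across retries. Note the compounding (multiplier applied once per failed attempt) is an assumption for illustration; the doc only states that the multiplier is applied on retries:

```python
def retry_gas_price(base_price: float, gas_multiplier: float,
                    attempt: int) -> float:
    """Gas price for the nth retry, assuming the multiplier compounds
    once per failed attempt (attempt 0 is the first submission)."""
    return base_price * gas_multiplier ** attempt

assert retry_gas_price(0.025, 1.0, 5) == 0.025           # 1.0: never escalates
assert round(retry_gas_price(0.02, 1.5, 2), 3) == 0.045  # 0.02 -> 0.03 -> 0.045
```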
Description: Additional options passed to the DA layer when submitting data. The format and meaning of these options depend on the specific DA implementation being used. For example, with Celestia, this can include custom gas settings or other submission parameters in JSON format.
Note: If you configure multiple signing addresses (see DA Signing Addresses), the selected signing address will be automatically merged into these options as a JSON field signer_address (matching Celestia's TxConfig schema). If the base options are already valid JSON, the signing address is added to the existing object; otherwise, a new JSON object is created.
YAML:
da:
  submit_options: '{"key":"value"}' # Example, format depends on DA layer

Command-line Flag:
--rollkit.da.submit_options <string>
Example: --rollkit.da.submit_options '{"custom_param":true}'
Default: "" (empty)
Constant: FlagDASubmitOptions
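The merging behavior described in the note above can be sketched as follows (illustrative Python, not Evolve's Go implementation; the field name signer_address follows Celestia's TxConfig schema as stated):

```python
import json

def merge_signer_address(submit_options: str, signer_address: str) -> str:
    """Add the selected signing address to the submit options: extend
    the existing object if the options are valid JSON, otherwise start
    a fresh JSON object, per the documented behavior."""
    try:
        opts = json.loads(submit_options) if submit_options else {}
        if not isinstance(opts, dict):
            opts = {}
    except json.JSONDecodeError:
        opts = {}
    opts["signer_address"] = signer_address
    return json.dumps(opts, sort_keys=True)

# Valid JSON options: the signing address is added to the object.
assert merge_signer_address('{"gas":"auto"}', "celestia1abc") == \
    '{"gas": "auto", "signer_address": "celestia1abc"}'
# Empty options: a new JSON object is created.
assert merge_signer_address("", "celestia1abc") == \
    '{"signer_address": "celestia1abc"}'
```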
Description: A comma-separated list of signing addresses to use for DA blob submissions. When multiple addresses are provided, they will be used in round-robin fashion to prevent sequence mismatches that can occur with high-throughput Cosmos SDK-based DA layers. This is particularly useful for Celestia when submitting many transactions concurrently.
Each submission will select the next address in the list, and that address will be automatically added to the submit_options as signer_address. This ensures that the DA layer (e.g., celestia-node) uses the specified account for signing that particular blob submission.
Setup Requirements:
- All addresses must be loaded into the DA node's keyring and have sufficient funds for transaction fees
- For Celestia, see the guide on setting up multiple accounts in the DA node documentation
YAML:
da:
  signing_addresses:
    - "celestia1abc123..."
    - "celestia1def456..."
    - "celestia1ghi789..."

Command-line Flag:
--evnode.da.signing_addresses <string>
Example: --rollkit.da.signing_addresses celestia1abc...,celestia1def...,celestia1ghi...
Default: [] (empty, uses default DA node behavior)
Constant: FlagDASigningAddresses
Behavior:
- If no signing addresses are configured, submissions use the DA layer's default signing behavior
- If one address is configured, all submissions use that address
- If multiple addresses are configured, they are used in round-robin order to distribute the load and prevent nonce/sequence conflicts
- The address selection is thread-safe for concurrent submissions
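The selection rules above can be sketched as a small thread-safe rotor (an illustrative model of the documented behavior, not Evolve's actual implementation):

```python
import threading

class RoundRobinSigner:
    """Round-robin over configured DA signing addresses, safe for
    concurrent submissions."""

    def __init__(self, addresses):
        self._addresses = list(addresses)
        self._index = 0
        self._lock = threading.Lock()

    def next_address(self):
        # No addresses configured: defer to the DA layer's default signer.
        if not self._addresses:
            return None
        with self._lock:
            addr = self._addresses[self._index % len(self._addresses)]
            self._index += 1
            return addr

rr = RoundRobinSigner(["celestia1abc", "celestia1def"])
assert [rr.next_address() for _ in range(4)] == [
    "celestia1abc", "celestia1def", "celestia1abc", "celestia1def"]
assert RoundRobinSigner([]).next_address() is None  # default DA behavior
```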
Description: The namespace ID used when submitting blobs (block data) to the DA layer. This helps segregate data from different chains or applications on a shared DA layer.
Note: If only namespace is provided, it will be used for both headers and data; otherwise, data_namespace will be used for data. Using a separate data namespace allows light clients to sync faster.
YAML:
da:
  namespace: "MY_UNIQUE_NAMESPACE_ID"

Command-line Flag:
--rollkit.da.namespace <string>
Example: --rollkit.da.namespace 0x1234567890abcdef
Default: "" (empty)
Constant: FlagDANamespace
Description: The namespace ID specifically for submitting transaction data to the DA layer. Transaction data is submitted separately from headers, enabling nodes to sync only the data they need. The namespace value is encoded by the node to ensure proper formatting and compatibility with the DA layer.
YAML:
da:
  data_namespace: "DATA_NAMESPACE_ID"

Command-line Flag:
--rollkit.da.data_namespace <string>
Example: --rollkit.da.data_namespace my_data_namespace
Default: Falls back to namespace if not set
Constant: FlagDADataNamespace
Description: The average block time of the Data Availability chain (specified as a duration string, e.g., "15s", "1m"). This value influences:
- The frequency of DA layer syncing.
- The maximum backoff time for retrying DA submissions.
- Calculation of transaction expiration when multiplied by mempool_ttl.
YAML:
da:
  block_time: "6s"

Command-line Flag:
--rollkit.da.block_time <duration>
Example: --rollkit.da.block_time 12s
Default: "6s"
Constant: FlagDABlockTime
Description: The number of DA blocks after which a transaction submitted to the DA layer is considered expired and potentially dropped from the DA layer's mempool. This also controls the retry backoff timing for DA submissions.
YAML:
da:
  mempool_ttl: 20

Command-line Flag:
--rollkit.da.mempool_ttl <uint64>
Example: --rollkit.da.mempool_ttl 30
Default: 20
Constant: FlagDAMempoolTTL
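Combining da.block_time and mempool_ttl gives the approximate wall-clock lifetime of a DA submission, as described above:

```python
def da_tx_expiry_seconds(da_block_time_s: float, mempool_ttl: int) -> float:
    """A DA submission is considered expired after mempool_ttl DA blocks."""
    return da_block_time_s * mempool_ttl

assert da_tx_expiry_seconds(6, 20) == 120  # defaults: expires after ~2 minutes
```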
Description:
Per-request timeout applied to DA GetIDs and Get RPC calls while retrieving blobs. Increase this value if your DA endpoint has high latency to avoid premature failures; decrease it to make the syncer fail fast and free resources sooner when the DA node becomes unresponsive.
YAML:
da:
  request_timeout: "30s"

Command-line Flag:
--rollkit.da.request_timeout <duration>
Example: --rollkit.da.request_timeout 45s
Default: "30s"
Constant: FlagDARequestTimeout
Description: Controls how blocks are batched before submission to the DA layer. Different strategies offer trade-offs between latency, cost efficiency, and throughput. All strategies pass through the DA submitter which performs additional size checks and may further split batches that exceed the DA layer's blob size limit.
Available strategies:
- immediate: Submits as soon as any items are available. Best for low-latency requirements where cost is not a concern.
- size: Waits until the batch reaches a size threshold (fraction of max blob size). Best for maximizing blob utilization and minimizing costs when latency is flexible.
- time: Waits for a time interval before submitting. Provides predictable submission timing aligned with DA block times.
- adaptive: Balances between size and time constraints: submits when either the size threshold is reached OR the max delay expires. Recommended for most production deployments as it optimizes both cost and latency.
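The four strategies can be sketched as a single decision function (an illustrative model of the documented rules; parameter defaults mirror the option defaults below):

```python
def should_submit(strategy: str, items: int, batch_bytes: int,
                  max_blob_bytes: int, elapsed_s: float,
                  size_threshold: float = 0.8, max_delay_s: float = 6.0,
                  min_items: int = 1) -> bool:
    """Decide whether to submit the current batch to the DA layer.
    All strategies respect batch_min_items; the DA submitter may still
    split oversized batches afterwards."""
    if items < min_items:
        return False
    size_ok = batch_bytes >= size_threshold * max_blob_bytes
    time_ok = elapsed_s >= max_delay_s
    return {
        "immediate": True,
        "size": size_ok,
        "time": time_ok,
        "adaptive": size_ok or time_ok,
    }[strategy]

# adaptive submits on whichever condition is met first:
assert should_submit("adaptive", 10, 850_000, 1_000_000, 1.0)  # size reached
assert should_submit("adaptive", 1, 10_000, 1_000_000, 6.0)    # delay expired
assert not should_submit("size", 1, 10_000, 1_000_000, 60.0)   # still too small
```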
YAML:
da:
  batching_strategy: "time"

Command-line Flag:
--rollkit.da.batching_strategy <string>
Example: --rollkit.da.batching_strategy adaptive
Default: "time"
Constant: FlagDABatchingStrategy
Description:
The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the size and adaptive strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
Higher values maximize blob utilization and reduce costs but may increase latency. Lower values reduce latency but may result in less efficient blob usage.
YAML:
da:
  batch_size_threshold: 0.8

Command-line Flag:
--rollkit.da.batch_size_threshold <float64>
Example: --rollkit.da.batch_size_threshold 0.9
Default: 0.8 (80% of max blob size)
Constant: FlagDABatchSizeThreshold
Description:
The maximum time to wait before submitting a batch regardless of size. Applies to the time and adaptive strategies. Lower values reduce latency but may increase costs due to smaller batches. This value is typically aligned with the DA chain's block time to ensure submissions land in consecutive blocks.
When set to 0, defaults to the DA BlockTime value.
YAML:
da:
  batch_max_delay: "6s"

Command-line Flag:
--rollkit.da.batch_max_delay <duration>
Example: --rollkit.da.batch_max_delay 12s
Default: 0 (uses DA BlockTime)
Constant: FlagDABatchMaxDelay
Description: The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum.
YAML:
da:
  batch_min_items: 1

Command-line Flag:
--rollkit.da.batch_min_items <uint64>
Example: --rollkit.da.batch_min_items 5
Default: 1
Constant: FlagDABatchMinItems
Settings for peer-to-peer networking, enabling nodes to discover each other, exchange blocks, and share transactions.
YAML Section:
p2p:
  # ... P2P configurations ...

Description: The network address (host:port) on which the Evolve node will listen for incoming P2P connections from other nodes.
YAML:
p2p:
  listen_address: "0.0.0.0:7676"

Command-line Flag:
--rollkit.p2p.listen_address <string>
Example: --rollkit.p2p.listen_address /ip4/127.0.0.1/tcp/26656
Default: "/ip4/0.0.0.0/tcp/7676"
Constant: FlagP2PListenAddress
Description: A comma-separated list of peer addresses (e.g., multiaddresses) that the node will attempt to connect to for bootstrapping its P2P connections. These are often referred to as seed nodes.
For DA-only sync mode: Leave this field empty (default) to disable P2P networking entirely. When no peers are configured, the node will sync exclusively from the Data Availability layer without participating in P2P gossip, peer discovery, or block sharing. This is useful for nodes that only need to follow the canonical chain data from DA.
YAML:
p2p:
  peers: "/ip4/some_peer_ip/tcp/7676/p2p/PEER_ID1,/ip4/another_peer_ip/tcp/7676/p2p/PEER_ID2"
  # For DA-only sync, leave peers empty:
  # peers: ""

Command-line Flag:
--rollkit.p2p.peers <string>
Example: --rollkit.p2p.peers /dns4/seed.example.com/tcp/26656/p2p/12D3KooW...
Default: "" (empty - enables DA-only sync mode)
Constant: FlagP2PPeers
Description: A comma-separated list of peer IDs that the node should block from connecting. This can be used to prevent connections from known malicious or problematic peers.
YAML:
p2p:
  blocked_peers: "PEER_ID_TO_BLOCK1,PEER_ID_TO_BLOCK2"

Command-line Flag:
--rollkit.p2p.blocked_peers <string>
Example: --rollkit.p2p.blocked_peers 12D3KooW...,12D3KooX...
Default: "" (empty)
Constant: FlagP2PBlockedPeers
Description: A comma-separated list of peer IDs that the node should exclusively allow connections from. If this list is non-empty, only peers in this list will be able to connect.
YAML:
p2p:
  allowed_peers: "PEER_ID_TO_ALLOW1,PEER_ID_TO_ALLOW2"

Command-line Flag:
--rollkit.p2p.allowed_peers <string>
Example: --rollkit.p2p.allowed_peers 12D3KooY...,12D3KooZ...
Default: "" (empty, allow all unless blocked)
Constant: FlagP2PAllowedPeers
Settings for the Remote Procedure Call (RPC) server, which allows clients and applications to interact with the Evolve node.
YAML Section:
rpc:
  # ... RPC configurations ...

Description: The network address (host:port) to which the RPC server will bind and listen for incoming requests.
YAML:
rpc:
  address: "127.0.0.1:7331"

Command-line Flag:
--rollkit.rpc.address <string>
Example: --rollkit.rpc.address 0.0.0.0:26657
Default: "127.0.0.1:7331"
Constant: FlagRPCAddress
Description: If true, enables the Data Availability (DA) visualization endpoints that provide real-time monitoring of blob submissions to the DA layer. This includes a web-based dashboard and REST API endpoints for tracking submission statistics, monitoring DA health, and analyzing blob details. Only aggregator nodes submit data to the DA layer, so this feature is most useful when running in aggregator mode.
YAML:
rpc:
  enable_da_visualization: true

Command-line Flag:
--rollkit.rpc.enable_da_visualization (boolean, presence enables it)
Example: --rollkit.rpc.enable_da_visualization
Default: false
Constant: FlagRPCEnableDAVisualization
See the DA Visualizer Guide for detailed information on using this feature.
Returns 200 OK if the process is alive and can access the store.
curl http://localhost:7331/health/live

Returns 200 OK if the node can serve correct data. Checks:
- P2P is listening (if enabled)
- Has synced blocks
- Not too far behind network
- Non-aggregators: has peers
- Aggregators: producing blocks at expected rate
curl http://localhost:7331/health/ready

Configure max blocks behind:

node:
  readiness_max_blocks_behind: 15

Settings for enabling and configuring metrics and profiling endpoints, useful for monitoring node performance and debugging.
YAML Section:
instrumentation:
  # ... instrumentation configurations ...

Description: If true, enables the Prometheus metrics endpoint, allowing Prometheus to scrape operational data from the Evolve node.
YAML:
instrumentation:
  prometheus: true

Command-line Flag:
--rollkit.instrumentation.prometheus (boolean, presence enables it)
Example: --rollkit.instrumentation.prometheus
Default: false
Constant: FlagPrometheus
Description: The network address (host:port) where the Prometheus metrics server will listen for scraping requests.
See Metrics for more details on what metrics are exposed.
YAML:
instrumentation:
  prometheus_listen_addr: ":2112"

Command-line Flag:
--rollkit.instrumentation.prometheus_listen_addr <string>
Example: --rollkit.instrumentation.prometheus_listen_addr 0.0.0.0:9090
Default: ":2112"
Constant: FlagPrometheusListenAddr
Description: The maximum number of simultaneous connections allowed for the metrics server (e.g., Prometheus endpoint).
YAML:
instrumentation:
  max_open_connections: 100

Command-line Flag:
--rollkit.instrumentation.max_open_connections <int>
Example: --rollkit.instrumentation.max_open_connections 50
Default: (Refer to DefaultInstrumentationConfig() in code, typically a reasonable number like 100)
Constant: FlagMaxOpenConnections
Description: If true, enables the pprof HTTP endpoint, which provides runtime profiling data for debugging performance issues. Accessing these endpoints can help diagnose CPU and memory usage.
YAML:
instrumentation:
  pprof: true

Command-line Flag:
--rollkit.instrumentation.pprof (boolean, presence enables it)
Example: --rollkit.instrumentation.pprof
Default: false
Constant: FlagPprof
Description: The network address (host:port) where the pprof HTTP server will listen for profiling requests.
YAML:
instrumentation:
  pprof_listen_addr: "localhost:6060"

Command-line Flag:
--rollkit.instrumentation.pprof_listen_addr <string>
Example: --rollkit.instrumentation.pprof_listen_addr 0.0.0.0:6061
Default: "localhost:6060"
Constant: FlagPprofListenAddr
Settings that control the verbosity and format of log output from the Evolve node. These are typically set via global flags.
YAML Section:
log:
  # ... logging configurations ...

Description:
Sets the minimum severity level for log messages to be displayed. Common levels include debug, info, warn, error.
YAML:
log:
  level: "info"

Command-line Flag:
--log.level <string> (Note: some applications might use a different flag name like --log_level)
Example: --log.level debug
Default: "info"
Constant: FlagLogLevel (value: "evolve.log.level", but often overridden by global app flags)
Description:
Sets the format for log output. Common formats include text (human-readable) and json (structured, machine-readable).
YAML:
log:
  format: "text"

Command-line Flag:
--log.format <string> (Note: some applications might use a different flag name like --log_format)
Example: --log.format json
Default: "text"
Constant: FlagLogFormat (value: "evolve.log.format", but often overridden by global app flags)
Description: If true, enables the inclusion of stack traces in error logs. This can be very helpful for debugging issues by showing the call stack at the point of an error.
YAML:
log:
  trace: false

Command-line Flag:
--log.trace (boolean, presence enables it; Note: some applications might use a different flag name like --log_trace)
Example: --log.trace
Default: false
Constant: FlagLogTrace (value: "evolve.log.trace", but often overridden by global app flags)
Settings related to the signing mechanism used by the node, particularly for aggregators that need to sign blocks.
YAML Section:
signer:
  # ... signer configurations ...

Description:
Specifies the type of remote signer to use. Common options might include file (for key files) or grpc (for connecting to a remote signing service).
YAML:
signer:
  signer_type: "file"

Command-line Flag:
--rollkit.signer.signer_type <string>
Example: --rollkit.signer.signer_type grpc
Default: (Depends on application, often "file" or none if not an aggregator)
Constant: FlagSignerType
Description:
The path to the signer file (if signer_type is file) or the address of the remote signer service (if signer_type is grpc or similar).
YAML:
signer:
  signer_path: "/path/to/priv_validator_key.json" # For file signer
  # signer_path: "localhost:9000" # For gRPC signer

Command-line Flag:
--rollkit.signer.signer_path <string>
Example: --rollkit.signer.signer_path ./config
Default: (Depends on application)
Constant: FlagSignerPath
Description:
The passphrase required to decrypt or access the signer key, particularly if using a file signer and the key is encrypted, or if the aggregator mode is enabled and requires it. This flag is not directly a field in the SignerConfig struct but is used in conjunction with it.
YAML: This is typically not stored in the YAML file for security reasons but provided via flag or environment variable.
Command-line Flag:
--rollkit.signer.passphrase <string>
Example: --rollkit.signer.passphrase "mysecretpassphrase"
Default: "" (empty)
Constant: FlagSignerPassphrase
Note: Be cautious with providing passphrases directly on the command line in shared environments due to history logging. Environment variables or secure input methods are often preferred.
This reference should help you configure your Evolve node effectively. Always refer to the specific version of Evolve you are using, as options and defaults may change over time.