Console supports multiple configuration methods; choose based on your deployment needs:
  • Console API/CLI/Terraform - recommended for production and GitOps environments requiring dynamic configuration management. Enables real-time updates without service interruption.
  • Console UI - ideal for development, testing, and quick configuration changes through the Console interface. Changes are not version-controlled or easily repeatable.
  • YAML/Environment variables - best for initial Console setup and static configurations that rarely change. Requires container restart to apply configuration changes.
This page focuses on YAML and environment variable configuration. For the API, CLI and Terraform methods, see the Console reference.
Clusters defined in YAML take precedence: any UI changes to these clusters are reset on restart. Clusters created through the UI are independent and persist regardless of YAML configuration. We recommend not mixing configuration methods, to avoid confusion and unwanted overrides.

Ready-to-use configurations

Complete production-ready setup for Confluent Cloud

This demonstrates a complete configuration for Conduktor Console including database, monitoring, authentication and Confluent Cloud cluster connections with SASL_SSL/PLAIN security, Schema Registry, and Kafka Connect.
database:
  hosts:
    - host: 'postgresql'
      port: 5432
  name: 'conduktor'
  username: 'conduktor'
  password: '<database-password>'
  connection_timeout: 30 # in seconds

monitoring:
  cortex-url: 'http://conduktor-monitoring:9009/'
  alert-manager-url: 'http://conduktor-monitoring:9010/'
  callback-url: 'http://conduktor-console:8080/monitoring/api/'
  notifications-callback-url: 'http://localhost:8080'

admin:
  email: '<admin-email>'
  password: '<admin-password>' # Must be at least 8 characters with mixed case, numbers, and symbols

sso:
  oauth2:
    - name: 'auth0'
      client-id: '<client-id>'
      client-secret: '<client-secret>'
      openid:
        issuer: 'https://<auth-domain>'
      scopes: # Optional
        - 'openid'
        - 'profile'
        - 'email'
      groups-claim: 'groups' # Optional - Default: 'roles'

auth:
  local-users: # Optional - Additional local users beyond admin
    - email: 'user@example.com'
      password: '<user-password>'
    - email: 'another@example.com'
      password: '<another-password>'

clusters:
  - id: 'confluent-prod'
    name: 'Confluent Production'
    color: '#FF5733' # Optional
    icon: 'kafka' # Optional
    bootstrapServers: 'pkc-xxxxx.region.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<cluster-api-key>" password="<cluster-api-secret>";
    kafkaFlavor:
      type: "Confluent"
      key: "<cloud-api-key>" # Confluent Cloud API Key, NOT cluster API Key
      secret: "<cloud-api-secret>" # Confluent Cloud API Secret, NOT cluster API Secret
      confluentEnvironmentId: "<environment-id>"
      confluentClusterId: "<cluster-id>"
      organizationId: "<organization-id>" # Optional - Required for RBAC role bindings
      schemaRegistryId: "<schema-registry-id>" # Optional - Required if managing Schema Registry via API
      enableRbacRoleBindings: true # Optional - Default: false
    schemaRegistry:
      url: 'https://psrc-xxxxx.region.aws.confluent.cloud'
      security:
        username: '<sr-api-key>'
        password: '<sr-api-secret>'
    kafkaConnects:
      - id: 'kafka-connect'
        name: 'My Kafka Connect'
        url: 'http://localhost:8083'
        security:
          username: '<connect-username>'
          password: '<connect-password>'

license: "<license-key>" # Enterprise license key
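Where environment variables are preferred over YAML, the same settings can be sketched with Console's CDK_-prefixed variables. This is a partial, hedged sketch assuming Console's usual YAML-to-environment mapping (nested keys joined with underscores, list items indexed from 0); check the Console reference for the exact variable names:

```shell
# Database and admin account (values mirror the YAML example above)
export CDK_DATABASE_HOSTS_0_HOST='postgresql'
export CDK_DATABASE_HOSTS_0_PORT='5432'
export CDK_DATABASE_NAME='conduktor'
export CDK_DATABASE_USERNAME='conduktor'
export CDK_DATABASE_PASSWORD='<database-password>'
export CDK_ADMIN_EMAIL='<admin-email>'
export CDK_ADMIN_PASSWORD='<admin-password>'

# First cluster (lists are flattened with 0-based indices)
export CDK_CLUSTERS_0_ID='confluent-prod'
export CDK_CLUSTERS_0_NAME='Confluent Production'
export CDK_CLUSTERS_0_BOOTSTRAPSERVERS='pkc-xxxxx.region.aws.confluent.cloud:9092'
```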

Complete production-ready setup for Aiven

This demonstrates a complete configuration for Conduktor Console including database, monitoring, authentication and an Aiven cluster connection using mTLS with the Aiven flavor. You need three PEM values:
  • Your access key.
  • Your access certificate.
  • Your CA certificate.
When inlining them into the cluster properties below, make sure each PEM content is on a single line.
database:
  hosts:
    - host: 'postgresql'
      port: 5432
  name: 'conduktor'
  username: 'conduktor'
  password: '<database-password>'
  connection_timeout: 30 # in seconds

monitoring:
  cortex-url: 'http://conduktor-monitoring:9009/'
  alert-manager-url: 'http://conduktor-monitoring:9010/'
  callback-url: 'http://conduktor-console:8080/monitoring/api/'
  notifications-callback-url: 'http://localhost:8080'

admin:
  email: '<admin-email>'
  password: '<admin-password>' # Must be at least 8 characters with mixed case, numbers, and symbols

sso:
  oauth2:
    - name: 'auth0'
      client-id: '<client-id>'
      client-secret: '<client-secret>'
      openid:
        issuer: 'https://<auth-domain>'
      scopes: # Optional
        - 'openid'
        - 'profile'
        - 'email'
      groups-claim: 'groups' # Optional - Default: 'roles'

auth:
  local-users: # Optional - Additional local users beyond admin
    - email: 'user@example.com'
      password: '<user-password>'
    - email: 'another@example.com'
      password: '<another-password>'

clusters:
  - id: 'aiven-ssl'
    name: 'Aiven SSL'
    color: '#FF5733' # Optional
    icon: 'kafka' # Optional
    bootstrapServers: 'kafka-09ba.aivencloud.com:21650'
    properties: |
      security.protocol=SSL
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----
      ssl.keystore.type=PEM
      ssl.keystore.key=-----BEGIN PRIVATE KEY----- <access-key> -----END PRIVATE KEY-----
      ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- <access-certificate> -----END CERTIFICATE-----
    kafkaFlavor:
      type: "Aiven"
      apiToken: "<api-token>"
      project: "<project-name>"
      serviceName: "kafka-xxxx" # kafka cluster id (service name)

license: "<license-key>" # Enterprise license key

Kafka cluster configuration

Console supports PLAINTEXT, SASL, SSL and AWS IAM (MSK) connection security. The example below shows a basic connection without authentication or encryption.
clusters:
  - id: 'local-kafka'
    name: 'Local Development'
    bootstrapServers: 'localhost:9092'
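The same cluster can be sketched with environment variables, assuming Console's usual CDK_-prefixed mapping (check the Console reference for exact names):

```shell
export CDK_CLUSTERS_0_ID='local-kafka'
export CDK_CLUSTERS_0_NAME='Local Development'
export CDK_CLUSTERS_0_BOOTSTRAPSERVERS='localhost:9092'
```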

Schema Registry configuration

To enable Schema Registry support, attach these code examples to any of the cluster configurations above. Console supports Confluent-like and AWS Glue Schema Registries, each with no authentication, basic authentication, bearer token authentication or SSL authentication. The example below configures a Confluent-like Schema Registry without authentication.
    schemaRegistry:
      url: 'https://psrc-xxxx.region.aws.confluent.cloud'
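The equivalent environment variable, assuming Console's usual CDK_-prefixed mapping, would look like this sketch:

```shell
export CDK_CLUSTERS_0_SCHEMAREGISTRY_URL='https://psrc-xxxx.region.aws.confluent.cloud'
```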

Kafka Connect configuration

To add Kafka Connect to your cluster configuration, use the code examples below. No authentication, basic authentication, bearer token authentication and SSL authentication are supported; the example below uses no authentication.
  kafkaConnects:
    - id: 'kafka-connect'
      name: 'My Kafka Connect'
      url: 'http://localhost:8083'
      headers: 'myHeader=myValue'
      ignoreUntrustedCertificate: false
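A hedged environment-variable equivalent, assuming Console's usual CDK_-prefixed mapping with 0-based list indices:

```shell
export CDK_CLUSTERS_0_KAFKACONNECTS_0_ID='kafka-connect'
export CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME='My Kafka Connect'
export CDK_CLUSTERS_0_KAFKACONNECTS_0_URL='http://localhost:8083'
export CDK_CLUSTERS_0_KAFKACONNECTS_0_HEADERS='myHeader=myValue'
export CDK_CLUSTERS_0_KAFKACONNECTS_0_IGNOREUNTRUSTEDCERTIFICATE='false'
```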

ksqlDB configuration

To add ksqlDB to your cluster configuration, use the code examples below. No authentication, basic authentication, bearer token authentication and SSL authentication are supported; the example below uses no authentication.
  ksqlDBs:
    - id: 'ksqldb-basic'
      name: 'My ksqlDB Server'
      url: 'http://localhost:8088'
      ignoreUntrustedCertificate: false
      headers: 'myHeader=myValue'
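A hedged environment-variable equivalent, assuming Console's usual CDK_-prefixed mapping with 0-based list indices:

```shell
export CDK_CLUSTERS_0_KSQLDBS_0_ID='ksqldb-basic'
export CDK_CLUSTERS_0_KSQLDBS_0_NAME='My ksqlDB Server'
export CDK_CLUSTERS_0_KSQLDBS_0_URL='http://localhost:8088'
```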

Provider configuration

To enable enhanced provider-specific capabilities, attach the following snippets to any of the above cluster configurations. Three flavors are available: Confluent Cloud, Aiven Cloud and Gateway. The example below connects to Confluent Cloud with enhanced management capabilities for service accounts, API keys and ACLs.
    kafkaFlavor:
      type: "Confluent"
      key: "<cloud-api-key>" # Confluent Cloud API Key, NOT cluster API Key
      secret: "<cloud-api-secret>" # Confluent Cloud API Secret, NOT cluster API Secret
      confluentEnvironmentId: "<environment-id>"
      confluentClusterId: "<cluster-id>"
      organizationId: "<organization-id>" # Optional - Required for RBAC role bindings
      schemaRegistryId: "<schema-registry-id>" # Optional - Required if managing Schema Registry via API
      enableRbacRoleBindings: true # Optional - Default: false
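The flavor settings can also be sketched as environment variables, assuming Console's usual CDK_-prefixed mapping (verify the exact names against the Console reference):

```shell
export CDK_CLUSTERS_0_KAFKAFLAVOR_TYPE='Confluent'
export CDK_CLUSTERS_0_KAFKAFLAVOR_KEY='<cloud-api-key>'        # Cloud API key, NOT cluster API key
export CDK_CLUSTERS_0_KAFKAFLAVOR_SECRET='<cloud-api-secret>'  # Cloud API secret, NOT cluster API secret
export CDK_CLUSTERS_0_KAFKAFLAVOR_CONFLUENTENVIRONMENTID='<environment-id>'
export CDK_CLUSTERS_0_KAFKAFLAVOR_CONFLUENTCLUSTERID='<cluster-id>'
```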

Logging configuration

Logging can be configured through environment variables (shown below) or the config file.

Global log settings

Configure Console-wide logging behavior using these environment variables:
  • CDK_ROOT_LOG_LEVEL (default: INFO) - global Console log level, one of OFF, ERROR, WARN, INFO, DEBUG
  • CDK_ROOT_LOG_FORMAT (default: TEXT) - log format, one of TEXT or JSON
  • CDK_ROOT_LOG_COLOR (default: true) - enable color in logs when possible
For backward compatibility, CDK_DEBUG: true is still supported and is equivalent to CDK_ROOT_LOG_LEVEL: DEBUG.

Module-specific log settings

Configure logging levels for individual Console modules. Possible values for all of them are: OFF, ERROR, WARN, INFO, DEBUG and TRACE.
  • PLATFORM_STARTUP_LOG_LEVEL (default: INFO) - logs from the setup/configuration process; switches to DEBUG when CDK_ROOT_LOG_LEVEL is DEBUG
  • CONSOLE_ROOT_LOG_LEVEL (default: CDK_ROOT_LOG_LEVEL) - logs related to any actions done in the Console UI
  • PLATFORM_API_ROOT_LOG_LEVEL (default: CDK_ROOT_LOG_LEVEL) - internal platform API logs (health endpoints)

Log level inheritance

If you don't explicitly set the log level for a module, it inherits CDK_ROOT_LOG_LEVEL. For instance, if you only set:
CDK_ROOT_LOG_LEVEL: DEBUG
# CONSOLE_ROOT_LOG_LEVEL isn't set
then CONSOLE_ROOT_LOG_LEVEL will be automatically set to DEBUG. Similarly, if you set:
CDK_ROOT_LOG_LEVEL: INFO
CONSOLE_ROOT_LOG_LEVEL: DEBUG
then CONSOLE_ROOT_LOG_LEVEL remains DEBUG; the explicit value isn't overridden.

JSON structured logging

Enable structured logging by setting CDK_ROOT_LOG_FORMAT=JSON. Logs will use this JSON format:
{
  "timestamp": "2024-06-14T10:09:25.802542476+00:00",
  "level": "<log level>",
  "message": "<log message>",
  "logger": "<logger name>",
  "thread": "<logger thread>",
  "stack_trace": "<throwable>",
  "mdc": {
    "key": "value"
  }
}
The log timestamp is encoded in ISO-8601 format. When structured logging is enabled, CDK_ROOT_LOG_COLOR is always ignored.
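As an illustration, structured logging can be enabled at container startup; this is a sketch, so adjust the image reference and other settings to your deployment:

```shell
docker run -d \
  -e CDK_ROOT_LOG_FORMAT=JSON \
  -e CDK_ROOT_LOG_LEVEL=INFO \
  conduktor/conduktor-console
```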

Runtime logger API

Console provides runtime log level management via a REST API. This requires an admin API key.
The loggerName filter used to get or set a logger level performs a "contains" match, so you can pass either the fully qualified logger name or just part of it: the filter authenticator will match the io.conduktor.authenticator and io.conduktor.authenticator.ConduktorUserProfile loggers, among others. The logLevel is case-insensitive and can be one of: TRACE, DEBUG, INFO, WARN, ERROR, OFF.
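As a hedged illustration of calling the runtime logger API with curl; the endpoint paths and auth header below are placeholders, not the documented routes, so take the real ones from the Console API reference:

```shell
# List loggers whose name contains "authenticator" (placeholder route)
curl -H "Authorization: Bearer <admin-api-key>" \
  "http://localhost:8080/<loggers-endpoint>?filter=authenticator"

# Set every matching logger to DEBUG (placeholder route; logLevel is case-insensitive)
curl -X PUT -H "Authorization: Bearer <admin-api-key>" \
  "http://localhost:8080/<loggers-endpoint>/io.conduktor.authenticator/DEBUG"
```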