Console supports multiple configuration methods - choose based on your deployment needs:
Console API/CLI/Terraform - recommended for production and GitOps environments requiring dynamic configuration management. Enables real-time updates without service interruption.
Console UI - ideal for development, testing, and quick configuration changes through the Console interface. Changes are not version-controlled or easily repeatable.
YAML/Environment variables - best for initial Console setup and static configurations that rarely change. Requires container restart to apply configuration changes.
This page focuses on YAML and environment variable configuration. For the API/CLI/Terraform methods, see the Console reference.
If you want to configure clusters with a GitOps approach, we recommend using the Console API.
Clusters created through the UI are independent and persist regardless of YAML configuration. We recommend not mixing configuration methods, to avoid confusion and unwanted overrides.
Ready-to-use configurations
Complete production-ready setup for Confluent Cloud
This demonstrates a complete configuration for Conduktor Console, including database, monitoring, authentication, and Confluent Cloud cluster connections with SASL_SSL/PLAIN security, Schema Registry, and Kafka Connect.
YAML file
Environment variables
database:
  hosts:
    - host: 'postgresql'
      port: 5432
  name: 'conduktor'
  username: 'conduktor'
  password: '<database-password>'
  connection_timeout: 30 # in seconds
monitoring:
  cortex-url: 'http://conduktor-monitoring:9009/'
  alert-manager-url: 'http://conduktor-monitoring:9010/'
  callback-url: 'http://conduktor-console:8080/monitoring/api/'
  notifications-callback-url: 'http://localhost:8080'
admin:
  email: '<admin-email>'
  password: '<admin-password>' # Must be at least 8 characters with mixed case, numbers, and symbols
sso:
  oauth2:
    - name: 'auth0'
      client-id: '<client-id>'
      client-secret: '<client-secret>'
      openid:
        issuer: 'https://<auth-domain>'
      scopes: # Optional
        - 'openid'
        - 'profile'
        - 'email'
      groups-claim: 'groups' # Optional - Default: 'roles'
auth:
  local-users: # Optional - Additional local users beyond admin
    - email: '<user-email>'
      password: '<user-password>'
    - email: '<another-user-email>'
      password: '<another-password>'
clusters:
  - id: 'confluent-prod'
    name: 'Confluent Production'
    color: '#FF5733' # Optional
    icon: 'kafka' # Optional
    bootstrapServers: 'pkc-xxxxx.region.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<cluster-api-key>" password="<cluster-api-secret>";
    kafkaFlavor:
      type: "Confluent"
      key: "<cloud-api-key>" # Confluent Cloud API Key, NOT cluster API Key
      secret: "<cloud-api-secret>" # Confluent Cloud API Secret, NOT cluster API Secret
      confluentEnvironmentId: "<environment-id>"
      confluentClusterId: "<cluster-id>"
      organizationId: "<organization-id>" # Optional - Required for RBAC role bindings
      schemaRegistryId: "<schema-registry-id>" # Optional - Required if managing Schema Registry via API
      enableRbacRoleBindings: true # Optional - Default: false
    schemaRegistry:
      url: 'https://psrc-xxxxx.region.aws.confluent.cloud'
      security:
        username: '<sr-api-key>'
        password: '<sr-api-secret>'
    kafkaConnects:
      - id: 'kafka-connect'
        name: 'My Kafka Connect'
        url: 'http://localhost:8083'
        security:
          username: '<connect-username>'
          password: '<connect-password>'
license: "<license-key>" # Enterprise license key
environment:
  CDK_CLUSTERS_0_ID: 'kafka'
  CDK_CLUSTERS_0_NAME: 'Kafka'
  CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'localhost:9092'
  CDK_CLUSTERS_0_KAFKACONNECTS_0_ID: 'kafka-connect'
  CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME: 'My Kafka Connect'
  CDK_CLUSTERS_0_KAFKACONNECTS_0_URL: 'http://localhost:8083'
  CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_USERNAME: '<username>'
  CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_PASSWORD: '<password>'
Amazon MSK with IAM authentication
Connect to an MSK cluster with IAM authentication. You can use explicit credentials or inherit them from the environment.
Using explicit credentials:
clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
Complete production-ready setup for Aiven
This demonstrates a complete configuration for Conduktor Console, including database, monitoring, authentication, and Aiven cluster connections using mTLS with the Aiven flavor.
You should have three PEM values, spread across two files:
Your access key (in the keystore.jks file).
Your access certificate (in the keystore.jks file).
Your CA certificate (in the truststore.jks file).
Make sure each PEM value is on a single line.
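If your Aiven credentials are still inside the JKS stores, you can extract them as PEM and flatten them to a single line. This is a minimal sketch; the alias and file names are assumptions, so adjust them to your keystores:

```shell
# Steps 1-2 (commented): extract PEM material from the JKS stores.
# The alias and file names below are assumptions; adjust to your keystores.
#   keytool -exportcert -alias ca -keystore truststore.jks -rfc -file ca.pem
#   keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.p12 -deststoretype PKCS12
#   openssl pkcs12 -in keystore.p12 -nocerts -nodes -out access-key.pem
#   openssl pkcs12 -in keystore.p12 -clcerts -nokeys -out access-cert.pem

# Step 3: flatten a PEM file to a single line, as the Console properties expect.
# (A sample file stands in for the real ca.pem here.)
printf -- '-----BEGIN CERTIFICATE-----\n<ca-certificate>\n-----END CERTIFICATE-----\n' > ca.pem
tr '\n' ' ' < ca.pem
```

Repeat the flatten step for each of the three PEM values before pasting them into the `properties` block.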
YAML file
Environment variables
database:
  hosts:
    - host: 'postgresql'
      port: 5432
  name: 'conduktor'
  username: 'conduktor'
  password: '<database-password>'
  connection_timeout: 30 # in seconds
monitoring:
  cortex-url: 'http://conduktor-monitoring:9009/'
  alert-manager-url: 'http://conduktor-monitoring:9010/'
  callback-url: 'http://conduktor-console:8080/monitoring/api/'
  notifications-callback-url: 'http://localhost:8080'
admin:
  email: '<admin-email>'
  password: '<admin-password>' # Must be at least 8 characters with mixed case, numbers, and symbols
sso:
  oauth2:
    - name: 'auth0'
      client-id: '<client-id>'
      client-secret: '<client-secret>'
      openid:
        issuer: 'https://<auth-domain>'
      scopes: # Optional
        - 'openid'
        - 'profile'
        - 'email'
      groups-claim: 'groups' # Optional - Default: 'roles'
auth:
  local-users: # Optional - Additional local users beyond admin
    - email: '<user-email>'
      password: '<user-password>'
    - email: '<another-user-email>'
      password: '<another-password>'
clusters:
  - id: 'aiven-ssl'
    name: 'Aiven SSL'
    color: '#FF5733' # Optional
    icon: 'kafka' # Optional
    bootstrapServers: 'kafka-09ba.aivencloud.com:21650'
    properties: |
      security.protocol=SSL
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----
      ssl.keystore.type=PEM
      ssl.keystore.key=-----BEGIN PRIVATE KEY----- <access-key> -----END PRIVATE KEY-----
      ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- <access-certificate> -----END CERTIFICATE-----
    kafkaFlavor:
      type: "Aiven"
      apiToken: "<api-token>"
      project: "<project-name>"
      serviceName: "kafka-xxxx" # Kafka cluster ID (service name)
license: "<license-key>" # Enterprise license key
environment:
  # Enterprise license key
  CDK_LICENSE: '<license-key>'
  # Database configuration
  CDK_DATABASE_URL: 'postgresql://conduktor:<database-password>@postgresql:5432/conduktor'
  # Connection to the Conduktor Cortex container
  CDK_MONITORING_CORTEX-URL: 'http://conduktor-monitoring:9009/'
  CDK_MONITORING_ALERT-MANAGER-URL: 'http://conduktor-monitoring:9010/'
  CDK_MONITORING_CALLBACK-URL: 'http://conduktor-console:8080/monitoring/api/'
  CDK_MONITORING_NOTIFICATIONS-CALLBACK-URL: 'http://localhost:8080'
  # Admin username/password
  CDK_ADMIN_EMAIL: '<admin-email>'
  CDK_ADMIN_PASSWORD: '<admin-password>' # Must be at least 8 characters with mixed case, numbers, and symbols
  # SSO configuration
  CDK_SSO_OAUTH2_0_NAME: 'auth0'
  CDK_SSO_OAUTH2_0_CLIENT-ID: '<client-id>'
  CDK_SSO_OAUTH2_0_CLIENT-SECRET: '<client-secret>'
  CDK_SSO_OAUTH2_0_OPENID_ISSUER: 'https://<auth-domain>'
  CDK_SSO_OAUTH2_0_SCOPES: 'openid,profile,email' # Optional - Comma-separated list
  CDK_SSO_OAUTH2_0_GROUPS-CLAIM: 'groups' # Optional
  # Local users configuration (optional - additional users beyond admin)
  CDK_AUTH_LOCAL-USERS_0_EMAIL: '<user-email>'
  CDK_AUTH_LOCAL-USERS_0_PASSWORD: '<user-password>'
  CDK_AUTH_LOCAL-USERS_1_EMAIL: '<another-user-email>'
  CDK_AUTH_LOCAL-USERS_1_PASSWORD: '<another-password>'
  # Kafka cluster configuration
  CDK_CLUSTERS_0_ID: 'aiven-ssl'
  CDK_CLUSTERS_0_NAME: 'Aiven SSL'
  CDK_CLUSTERS_0_COLOR: '#FF5733' # Optional
  CDK_CLUSTERS_0_ICON: 'kafka' # Optional
  CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'kafka-09ba.aivencloud.com:21650'
  CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SSL\nssl.truststore.type=PEM\nssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----\nssl.keystore.type=PEM\nssl.keystore.key=-----BEGIN PRIVATE KEY----- <access-key> -----END PRIVATE KEY-----\nssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- <access-certificate> -----END CERTIFICATE-----"
  # Aiven flavor configuration
  CDK_CLUSTERS_0_KAFKAFLAVOR_TYPE: "Aiven"
  CDK_CLUSTERS_0_KAFKAFLAVOR_APITOKEN: "<api-token>"
  CDK_CLUSTERS_0_KAFKAFLAVOR_PROJECT: "<project-name>"
  CDK_CLUSTERS_0_KAFKAFLAVOR_SERVICENAME: "kafka-xxxx"
Kafka Cluster configuration
None (PLAINTEXT)
SASL
SSL
AWS IAM (MSK)
Basic connection without authentication or encryption.
YAML file
Environment variables
clusters:
  - id: 'local-kafka'
    name: 'Local Development'
    bootstrapServers: 'localhost:9092'
CDK_CLUSTERS_0_ID: 'local-kafka'
CDK_CLUSTERS_0_NAME: 'Local Development'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'localhost:9092'
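Environment variables like these are typically passed to the Console container at startup. As a minimal Docker Compose sketch (the image name, tag and service name here are assumptions; adapt them to your deployment):

```yaml
services:
  conduktor-console:
    image: conduktor/conduktor-console:latest # image name/tag is an assumption; pin a real version
    ports:
      - '8080:8080'
    environment:
      CDK_CLUSTERS_0_ID: 'local-kafka'
      CDK_CLUSTERS_0_NAME: 'Local Development'
      CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'localhost:9092'
```

Remember that environment variable changes only take effect after the container is restarted.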
SASL can be used with PLAINTEXT or SSL transport, supporting multiple mechanisms: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI and OAUTHBEARER.
YAML file
Environment variables
clusters:
  - id: 'sasl-plain-plaintext'
    name: 'SASL PLAIN (Plaintext)'
    bootstrapServers: 'broker.example.com:9092'
    properties: |
      security.protocol=SASL_PLAINTEXT
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<sasl-username>" password="<sasl-password>";
CDK_CLUSTERS_0_ID: 'sasl-plain-plaintext'
CDK_CLUSTERS_0_NAME: 'SASL PLAIN (Plaintext)'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker.example.com:9092'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SASL_PLAINTEXT\nsasl.mechanism=PLAIN\nsasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<sasl-username>\" password=\"<sasl-password>\";"
YAML file
Environment variables
clusters:
  - id: 'sasl-scram-plaintext'
    name: 'SASL SCRAM (Plaintext)'
    bootstrapServers: 'broker.example.com:9092'
    properties: |
      security.protocol=SASL_PLAINTEXT
      sasl.mechanism=SCRAM-SHA-256
      sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<sasl-username>" password="<sasl-password>";
CDK_CLUSTERS_0_ID: 'sasl-scram-plaintext'
CDK_CLUSTERS_0_NAME: 'SASL SCRAM (Plaintext)'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker.example.com:9092'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SASL_PLAINTEXT\nsasl.mechanism=SCRAM-SHA-256\nsasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=\"<sasl-username>\" password=\"<sasl-password>\";"
YAML file
Environment variables
clusters:
  - id: 'sasl-plain-ssl'
    name: 'SASL PLAIN (SSL)'
    bootstrapServers: 'broker.example.com:9093'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<sasl-username>" password="<sasl-password>";
CDK_CLUSTERS_0_ID: 'sasl-plain-ssl'
CDK_CLUSTERS_0_NAME: 'SASL PLAIN (SSL)'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker.example.com:9093'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SASL_SSL\nsasl.mechanism=PLAIN\nsasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<sasl-username>\" password=\"<sasl-password>\";"
YAML file
Environment variables
clusters:
  - id: 'sasl-scram-ssl'
    name: 'SASL SCRAM (SSL)'
    bootstrapServers: 'broker.example.com:9093'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=SCRAM-SHA-512
      sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<sasl-username>" password="<sasl-password>";
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----
CDK_CLUSTERS_0_ID: 'sasl-scram-ssl'
CDK_CLUSTERS_0_NAME: 'SASL SCRAM (SSL)'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker.example.com:9093'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SASL_SSL\nsasl.mechanism=SCRAM-SHA-512\nsasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=\"<sasl-username>\" password=\"<sasl-password>\";\nssl.truststore.type=PEM\nssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----"
OIDC authentication (available since Kafka 3.1 - KIP-768). To try OIDC authentication, you can connect to NASA's GCN Kafka cluster after you sign up; adapt the values in this example to your needs.
YAML file
Environment variables
clusters:
  - id: 'sasl-oauth-ssl'
    name: 'SASL OAuth (SSL)'
    bootstrapServers: 'broker.example.com:9093'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=OAUTHBEARER
      sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="<oauth-client-id>" clientSecret="<oauth-client-secret>";
      sasl.oauthbearer.token.endpoint.url=https://auth.example.com/oauth2/token
      sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
CDK_CLUSTERS_0_ID: 'sasl-oauth-ssl'
CDK_CLUSTERS_0_NAME: 'SASL OAuth (SSL)'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker.example.com:9093'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SASL_SSL\nsasl.mechanism=OAUTHBEARER\nsasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId=\"<oauth-client-id>\" clientSecret=\"<oauth-client-secret>\";\nsasl.oauthbearer.token.endpoint.url=https://auth.example.com/oauth2/token\nsasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler"
One-way TLS
Mutual TLS (mTLS)
Server authentication only.
YAML file
Environment variables
clusters:
  - id: 'ssl-oneway'
    name: 'SSL One-way TLS'
    bootstrapServers: 'broker.example.com:9093'
    properties: |
      security.protocol=SSL
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----
CDK_CLUSTERS_0_ID: 'ssl-oneway'
CDK_CLUSTERS_0_NAME: 'SSL One-way TLS'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker.example.com:9093'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SSL\nssl.truststore.type=PEM\nssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----"
Client and server authentication.
YAML file
Environment variables
clusters:
  - id: 'ssl-mtls'
    name: 'SSL Mutual TLS'
    bootstrapServers: 'broker.example.com:9093'
    properties: |
      security.protocol=SSL
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----
      ssl.keystore.type=PEM
      ssl.keystore.key=-----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----
      ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----
CDK_CLUSTERS_0_ID: 'ssl-mtls'
CDK_CLUSTERS_0_NAME: 'SSL Mutual TLS'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'broker.example.com:9093'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SSL\nssl.truststore.type=PEM\nssl.truststore.certificates=-----BEGIN CERTIFICATE----- <ca-certificate> -----END CERTIFICATE-----\nssl.keystore.type=PEM\nssl.keystore.key=-----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----\nssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----"
Environment credentials
Explicit credentials
Credentials inherited from the environment.
YAML file
Environment variables
clusters:
  - id: 'aws-msk-iam'
    name: 'AWS MSK with IAM'
    bootstrapServers: 'b-1.cluster.kafka.region.amazonaws.com:9098'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
CDK_CLUSTERS_0_ID: 'aws-msk-iam'
CDK_CLUSTERS_0_NAME: 'AWS MSK with IAM'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'b-1.cluster.kafka.region.amazonaws.com:9098'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SASL_SSL\nsasl.mechanism=AWS_MSK_IAM\nsasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;\nsasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler"
You can also override the default profile or role:
Profile: sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<aws-profile-name>";
Role: sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required awsRoleArn="<aws-role-arn>";
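When credentials are inherited from the environment, the Console container itself needs access to them, either through an attached instance/IRSA role or through the standard AWS SDK environment variables. A minimal sketch, assuming a Docker Compose deployment:

```yaml
services:
  conduktor-console:
    environment:
      # Standard AWS SDK credential variables, picked up by the MSK IAM callback handler
      AWS_ACCESS_KEY_ID: '<aws-access-key-id>'
      AWS_SECRET_ACCESS_KEY: '<aws-secret-access-key>'
      AWS_REGION: '<aws-region>'
```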
Credentials from explicit configuration.
YAML file
Environment variables
clusters:
  - id: 'aws-msk-iam'
    name: 'AWS MSK with IAM'
    bootstrapServers: 'b-1.cluster.kafka.region.amazonaws.com:9098'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<aws-access-key-id>
      aws_secret_access_key=<aws-secret-access-key>
CDK_CLUSTERS_0_ID: 'aws-msk-iam'
CDK_CLUSTERS_0_NAME: 'AWS MSK with IAM'
CDK_CLUSTERS_0_BOOTSTRAPSERVERS: 'b-1.cluster.kafka.region.amazonaws.com:9098'
CDK_CLUSTERS_0_PROPERTIES: "security.protocol=SASL_SSL\nsasl.mechanism=AWS_MSK_IAM\nsasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;\nsasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler\naws_access_key_id=<aws-access-key-id>\naws_secret_access_key=<aws-secret-access-key>"
Schema Registry configuration
To enable Schema Registry support, attach these code examples to any of the cluster configurations above.
YAML file
Environment variables
schemaRegistry:
  url: 'https://psrc-xxxx.region.aws.confluent.cloud'
CDK_CLUSTERS_0_SCHEMAREGISTRY_URL: 'https://psrc-xxxx.region.aws.confluent.cloud'
YAML file
Environment variables
schemaRegistry:
  url: 'https://psrc-xxxxx.region.aws.confluent.cloud'
  security:
    username: '<sr-api-key>'
    password: '<sr-api-secret>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_URL: 'https://psrc-xxxxx.region.aws.confluent.cloud'
CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_USERNAME: '<sr-api-key>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_PASSWORD: '<sr-api-secret>'
YAML file
Environment variables
schemaRegistry:
  url: 'https://psrc-xxxxx.region.aws.confluent.cloud'
  security:
    token: '<bearer-token>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_URL: 'https://psrc-xxxxx.region.aws.confluent.cloud'
CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_TOKEN: '<bearer-token>'
YAML file
Environment variables
schemaRegistry:
  url: 'https://psrc-xxxxx.region.aws.confluent.cloud'
  security:
    key: -----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----
    certificateChain: -----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----
CDK_CLUSTERS_0_SCHEMAREGISTRY_URL: 'https://psrc-xxxxx.region.aws.confluent.cloud'
CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_KEY: '-----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----'
CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_CERTIFICATECHAIN: '-----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----'
Connect MSK clusters with AWS Glue Schema Registry using different authentication methods.
Explicit credentials
IAM profile
IAM role
YAML file
Environment variables
schemaRegistry:
  region: '<aws-region>'
  registryName: '<registry-name>' # Optional
  amazonSecurity:
    type: 'Credentials'
    accessKeyId: '<access-key-id>'
    secretKey: '<secret-key>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_REGION: '<aws-region>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_REGISTRYNAME: '<registry-name>' # optional
CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_TYPE: 'Credentials'
CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_ACCESSKEYID: '<access-key-id>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_SECRETKEY: '<secret-key>'
YAML file
Environment variables
schemaRegistry:
  region: '<aws-region>'
  registryName: '<registry-name>' # Optional
  amazonSecurity:
    type: 'FromContext'
    profile: '<aws-profile-name>' # Optional - inherited from environment by default
CDK_CLUSTERS_0_SCHEMAREGISTRY_REGION: '<aws-region>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_TYPE: 'FromContext'
CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_PROFILE: '<aws-profile-name>' # optional, inherited from environment by default
YAML file
Environment variables
schemaRegistry:
  region: '<aws-region>'
  registryName: '<registry-name>' # Optional
  amazonSecurity:
    type: 'FromRole'
    role: '<aws-role-arn>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_REGION: '<aws-region>'
CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_TYPE: 'FromRole'
CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_ROLE: '<aws-role-arn>'
Kafka Connect configuration
To add Kafka Connect to your cluster configuration, use the code examples below.
YAML file
Environment variables
kafkaConnects:
  - id: 'kafka-connect'
    name: 'My Kafka Connect'
    url: 'http://localhost:8083'
    headers: 'myHeader=myValue'
    ignoreUntrustedCertificate: false
CDK_CLUSTERS_0_KAFKACONNECTS_0_ID: 'kafka-connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME: 'My Kafka Connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_URL: 'http://localhost:8083'
CDK_CLUSTERS_0_KAFKACONNECTS_0_HEADERS: 'myHeader=myValue'
CDK_CLUSTERS_0_KAFKACONNECTS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
YAML file
Environment variables
kafkaConnects:
  - id: 'kafka-connect'
    name: 'My Kafka Connect'
    url: 'http://localhost:8083'
    headers: 'myHeader=myValue'
    ignoreUntrustedCertificate: false
    security:
      username: '<connect-username>'
      password: '<connect-password>'
CDK_CLUSTERS_0_KAFKACONNECTS_0_ID: 'kafka-connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME: 'My Kafka Connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_URL: 'http://localhost:8083'
CDK_CLUSTERS_0_KAFKACONNECTS_0_HEADERS: 'myHeader=myValue'
CDK_CLUSTERS_0_KAFKACONNECTS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_USERNAME: '<connect-username>'
CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_PASSWORD: '<connect-password>'
YAML file
Environment variables
kafkaConnects:
  - id: 'kafka-connect'
    name: 'My Kafka Connect'
    url: 'http://localhost:8083'
    headers: 'myHeader=myValue'
    ignoreUntrustedCertificate: false
    security:
      token: '<bearer-token>'
CDK_CLUSTERS_0_KAFKACONNECTS_0_ID: 'kafka-connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME: 'My Kafka Connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_URL: 'http://localhost:8083'
CDK_CLUSTERS_0_KAFKACONNECTS_0_HEADERS: 'myHeader=myValue'
CDK_CLUSTERS_0_KAFKACONNECTS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_TOKEN: '<bearer-token>'
YAML file
Environment variables
kafkaConnects:
  - id: 'kafka-connect'
    name: 'My Kafka Connect'
    url: 'http://localhost:8083'
    headers: 'myHeader=myValue'
    ignoreUntrustedCertificate: false
    security:
      key: -----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----
      certificateChain: -----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----
CDK_CLUSTERS_0_KAFKACONNECTS_0_ID: 'kafka-connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME: 'My Kafka Connect'
CDK_CLUSTERS_0_KAFKACONNECTS_0_URL: 'http://localhost:8083'
CDK_CLUSTERS_0_KAFKACONNECTS_0_HEADERS: 'myHeader=myValue'
CDK_CLUSTERS_0_KAFKACONNECTS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_KEY: '-----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----'
CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_CERTIFICATECHAIN: '-----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----'
ksqlDB configuration
To add ksqlDB to your cluster configuration, use the code examples below.
YAML file
Environment variables
ksqlDBs:
  - id: 'ksqldb-basic'
    name: 'My ksqlDB Server'
    url: 'http://localhost:8088'
    ignoreUntrustedCertificate: false
    headers: 'myHeader=myValue'
CDK_CLUSTERS_0_KSQLDBS_0_ID: 'ksqldb-basic'
CDK_CLUSTERS_0_KSQLDBS_0_NAME: 'My ksqlDB Server'
CDK_CLUSTERS_0_KSQLDBS_0_URL: 'http://localhost:8088'
CDK_CLUSTERS_0_KSQLDBS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
CDK_CLUSTERS_0_KSQLDBS_0_HEADERS: 'myHeader=myValue'
YAML file
Environment variables
ksqlDBs:
  - id: 'ksqldb-basic-auth'
    name: 'My ksqlDB Server'
    url: 'http://localhost:8088'
    ignoreUntrustedCertificate: false
    headers: 'myHeader=myValue'
    security:
      username: '<ksqldb-username>'
      password: '<ksqldb-password>'
CDK_CLUSTERS_0_KSQLDBS_0_ID: 'ksqldb-basic-auth'
CDK_CLUSTERS_0_KSQLDBS_0_NAME: 'My ksqlDB Server'
CDK_CLUSTERS_0_KSQLDBS_0_URL: 'http://localhost:8088'
CDK_CLUSTERS_0_KSQLDBS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
CDK_CLUSTERS_0_KSQLDBS_0_HEADERS: 'myHeader=myValue'
CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_USERNAME: '<ksqldb-username>'
CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_PASSWORD: '<ksqldb-password>'
YAML file
Environment variables
ksqlDBs:
  - id: 'ksqldb-token-auth'
    name: 'My ksqlDB Server'
    url: 'http://localhost:8088'
    ignoreUntrustedCertificate: false
    headers: 'myHeader=myValue'
    security:
      token: '<bearer-token>'
CDK_CLUSTERS_0_KSQLDBS_0_ID: 'ksqldb-token-auth'
CDK_CLUSTERS_0_KSQLDBS_0_NAME: 'My ksqlDB Server'
CDK_CLUSTERS_0_KSQLDBS_0_URL: 'http://localhost:8088'
CDK_CLUSTERS_0_KSQLDBS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
CDK_CLUSTERS_0_KSQLDBS_0_HEADERS: 'myHeader=myValue'
CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_TOKEN: '<bearer-token>'
YAML file
Environment variables
ksqlDBs:
  - id: 'ksqldb-ssl-auth'
    name: 'My ksqlDB Server'
    url: 'http://localhost:8088'
    ignoreUntrustedCertificate: false
    headers: 'myHeader=myValue'
    security:
      key: -----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----
      certificateChain: -----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----
CDK_CLUSTERS_0_KSQLDBS_0_ID: 'ksqldb-ssl-auth'
CDK_CLUSTERS_0_KSQLDBS_0_NAME: 'My ksqlDB Server'
CDK_CLUSTERS_0_KSQLDBS_0_URL: 'http://localhost:8088'
CDK_CLUSTERS_0_KSQLDBS_0_IGNOREUNTRUSTEDCERTIFICATE: 'false'
CDK_CLUSTERS_0_KSQLDBS_0_HEADERS: 'myHeader=myValue'
CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_KEY: '-----BEGIN PRIVATE KEY----- <client-key> -----END PRIVATE KEY-----'
CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_CERTIFICATECHAIN: '-----BEGIN CERTIFICATE----- <client-certificate> -----END CERTIFICATE-----'
Provider configuration
To enable enhanced provider-specific capabilities, attach the following snippets to any of the above cluster configurations.
Confluent Cloud flavor
Aiven Cloud flavor
Gateway flavor
Connect to Confluent Cloud with enhanced management capabilities for service accounts, API keys, and ACLs.
YAML file
Environment variables
kafkaFlavor:
  type: "Confluent"
  key: "<cloud-api-key>" # Confluent Cloud API Key, NOT cluster API Key
  secret: "<cloud-api-secret>" # Confluent Cloud API Secret, NOT cluster API Secret
  confluentEnvironmentId: "<environment-id>"
  confluentClusterId: "<cluster-id>"
  organizationId: "<organization-id>" # Optional - Required for RBAC role bindings
  schemaRegistryId: "<schema-registry-id>" # Optional - Required if managing Schema Registry via API
  enableRbacRoleBindings: true # Optional - Default: false
CDK_CLUSTERS_0_KAFKAFLAVOR_TYPE: "Confluent"
CDK_CLUSTERS_0_KAFKAFLAVOR_KEY: "<cloud-api-key>"
CDK_CLUSTERS_0_KAFKAFLAVOR_SECRET: "<cloud-api-secret>"
CDK_CLUSTERS_0_KAFKAFLAVOR_CONFLUENTENVIRONMENTID: "<environment-id>"
CDK_CLUSTERS_0_KAFKAFLAVOR_CONFLUENTCLUSTERID: "<cluster-id>"
CDK_CLUSTERS_0_KAFKAFLAVOR_ORGANIZATIONID: "<organization-id>" # Optional
CDK_CLUSTERS_0_KAFKAFLAVOR_SCHEMAREGISTRYID: "<schema-registry-id>" # Optional
CDK_CLUSTERS_0_KAFKAFLAVOR_ENABLERBACROLEBINDINGS: "true" # Optional
Connect to Aiven Kafka with enhanced management capabilities for service accounts and ACLs.
YAML file
Environment variables
kafkaFlavor:
  type: "Aiven"
  apiToken: "<api-token>"
  project: "<project-name>"
  serviceName: "kafka-18350d67" # Kafka cluster ID (service name)
CDK_CLUSTERS_0_KAFKAFLAVOR_TYPE: "Aiven"
CDK_CLUSTERS_0_KAFKAFLAVOR_APITOKEN: "<api-token>"
CDK_CLUSTERS_0_KAFKAFLAVOR_PROJECT: "<project-name>"
CDK_CLUSTERS_0_KAFKAFLAVOR_SERVICENAME: "kafka-18350d67"
Connect Console to Conduktor Gateway for centralized Interceptor management through the Console UI.
YAML file
Environment variables
kafkaFlavor:
  type: "Gateway"
  url: "http://conduktor-gateway:8888"
  user: "<gateway-username>"
  password: "<gateway-password>"
  virtualCluster: "passthrough"
CDK_CLUSTERS_0_KAFKAFLAVOR_TYPE: "Gateway"
CDK_CLUSTERS_0_KAFKAFLAVOR_URL: "http://conduktor-gateway:8888"
CDK_CLUSTERS_0_KAFKAFLAVOR_USER: "<gateway-username>"
CDK_CLUSTERS_0_KAFKAFLAVOR_PASSWORD: "<gateway-password>"
CDK_CLUSTERS_0_KAFKAFLAVOR_VIRTUALCLUSTER: "passthrough"
Logging configuration
Environment variables
Config file
Global log settings
Configure Console-wide logging behavior using these environment variables:
Environment variable | Default value | Description
CDK_ROOT_LOG_LEVEL | INFO | Global Console log level; one of OFF, ERROR, WARN, INFO, DEBUG
CDK_ROOT_LOG_FORMAT | TEXT | Log format; one of TEXT or JSON
CDK_ROOT_LOG_COLOR | true | Enable color in logs when possible
For backward compatibility, CDK_DEBUG: true is still supported and is equivalent to CDK_ROOT_LOG_LEVEL: DEBUG.
Module-specific log settings
Configure logging levels for individual Console modules. Possible values for all of them are: OFF, ERROR, WARN, INFO, DEBUG and TRACE.
Environment variable | Default value | Description
PLATFORM_STARTUP_LOG_LEVEL | INFO | Log level of the setup/configuration process; INFO by default, but switches to DEBUG if CDK_ROOT_LOG_LEVEL: DEBUG
CONSOLE_ROOT_LOG_LEVEL | CDK_ROOT_LOG_LEVEL | Logs related to any actions done in the Console UI
PLATFORM_API_ROOT_LOG_LEVEL | CDK_ROOT_LOG_LEVEL | Internal platform API logs (health endpoints)
Log level inheritance
If you don't explicitly set the log level for a module, it inherits CDK_ROOT_LOG_LEVEL. For instance, if you only set:
CDK_ROOT_LOG_LEVEL: DEBUG
# CONSOLE_ROOT_LOG_LEVEL isn't set
then CONSOLE_ROOT_LOG_LEVEL will automatically be set to DEBUG. Similarly, if you set:
CDK_ROOT_LOG_LEVEL: INFO
CONSOLE_ROOT_LOG_LEVEL: DEBUG
then CONSOLE_ROOT_LOG_LEVEL will still be DEBUG; it isn't overridden.
If you want to customize logging further, at the level of individual loggers, you can use a per-module logback configuration file. By default, all logback configuration files are in /opt/conduktor/loggers/ with READ-ONLY permissions. At startup, Console copies any missing logback configuration files from /opt/conduktor/loggers/ to the /var/conduktor/configs/loggers/ directory with READ-WRITE permissions. Because all logback configuration files reload themselves every 15 seconds, you can edit them inside the container volume in /var/conduktor/configs/loggers/ to tune the log level per logger.
All logback configuration files declare some expected appenders:
Appender name | Description
STDOUT | Writes logs to stdout
STDOUT_COLOR | Writes logs to stdout with color
ASYNC_STDOUT | Writes logs to stdout asynchronously
ASYNC_STDOUT_COLOR | Writes logs to stdout asynchronously with color
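As an illustration, a per-module logback file edited in /var/conduktor/configs/loggers/ might look like the sketch below. The logger name and pattern layout here are assumptions; check the actual files shipped in /opt/conduktor/loggers/ for the real appender definitions:

```xml
<configuration scan="true" scanPeriod="15 seconds">
  <!-- Appender names follow the table above; this layout is an illustrative assumption. -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{ISO8601} %-5level [%thread] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Tune an individual logger without changing the module's root level -->
  <logger name="io.conduktor.authenticator" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

Because scan is enabled, the change is picked up within the 15-second reload window without restarting the container.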
JSON structured logging
Enable structured logging by setting CDK_ROOT_LOG_FORMAT=JSON. Logs will use this JSON format:
{
  "timestamp": "2024-06-14T10:09:25.802542476+00:00",
  "level": "<log level>",
  "message": "<log message>",
  "logger": "<logger name>",
  "thread": "<logger thread>",
  "stack_trace": "<throwable>",
  "mdc": {
    "key": "value"
  }
}
The log timestamp is encoded in ISO-8601 format. When structured logging is enabled, CDK_ROOT_LOG_COLOR is always ignored.
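One practical benefit of JSON logs is that they are easy to filter mechanically. A minimal sketch using sample log lines (in a real deployment you would pipe `docker logs <console-container>` or your log collector into the same filter, or use jq for richer queries):

```shell
# Two sample structured log lines, as produced with CDK_ROOT_LOG_FORMAT=JSON
logs='{"level":"INFO","message":"started"}
{"level":"ERROR","message":"connection refused"}'

# Keep only the ERROR entries
printf '%s\n' "$logs" | grep '"level" *: *"ERROR"'
```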
Runtime logger API
Console provides runtime log level management via a REST API. This requires an admin API key.
The loggerName filter used to get or set a logger level is a contains match, so you can pass either the fully qualified logger name or just part of it. For example, the filter authenticator will match the io.conduktor.authenticator and io.conduktor.authenticator.ConduktorUserProfile loggers, among others. The logLevel is case-insensitive and can be: TRACE, DEBUG, INFO, WARN, ERROR or OFF.