There are different options for configuring Conduktor Console. You can use:
- a YAML configuration file
- environment variables
- our API for some configurations (such as Kafka cluster configuration)
- the UI (e.g. to configure clusters, go to the Settings > Clusters page)
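Whichever method you pick, the YAML keys and environment variables describe the same settings: each YAML path maps to an upper-cased, underscore-joined variable with a CDK_ prefix, with list items indexed. The snippet below is a minimal sketch of that mapping; double-check the exact variable names against the configuration reference for your Console version.

# Hedged example: environment-variable equivalents of a few YAML keys
CDK_ADMIN_EMAIL=admin@your_company.io
CDK_ADMIN_PASSWORD=MySecur3P@ssw0rd!
CDK_CLUSTERS_0_ID=local
CDK_CLUSTERS_0_NAME=Local Kafka Cluster
CDK_CLUSTERS_0_BOOTSTRAPSERVERS=localhost:9092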
GitOps: Manage clusters
If you want to configure clusters with a GitOps approach, we recommend using the Console API.
Note that from Console v1.19, if you configure clusters through the YAML file, that file acts as the source of truth for the cluster definition. Any changes you make to such a cluster via the UI will be overridden on the next restart, as long as the configuration file still contains a reference to that cluster.
However, clusters created through the Console UI are not impacted by a restart, and removing the YAML block entirely will not remove existing clusters from the UI.
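As a non-authoritative sketch of that GitOps flow, a cluster definition can live in Git and be applied from CI with the Conduktor CLI. The KafkaCluster resource and field names below are assumptions to verify against the Console API/CLI reference for your version:

# cluster.yaml, kept in Git (hypothetical example)
apiVersion: console/v2
kind: KafkaCluster
metadata:
  name: confluent-prod
spec:
  displayName: 'Confluent Production'
  bootstrapServers: 'pkc-xxxxx.eu-central-1.aws.confluent.cloud:9092'

The file would then be applied from your CI pipeline, for example with conduktor apply -f cluster.yaml.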
Complete configuration example
This demonstrates a complete configuration for Conduktor Console including database, monitoring, authentication and cluster connections.
database:
  hosts:
    - host: 'postgresql'
      port: 5432
  name: 'conduktor'
  username: 'conduktor'
  password: 'change_me'
  connection_timeout: 30 # in seconds
monitoring:
  cortex-url: 'http://conduktor-monitoring:9009/'
  alert-manager-url: 'http://conduktor-monitoring:9010/'
  callback-url: 'http://conduktor-console:8080/monitoring/api/'
  notifications-callback-url: 'http://localhost:8080'
admin:
  email: 'admin@your_company.io'
  password: "MySecur3P@ssw0rd!" # Must be at least 8 characters with mixed case, numbers, and symbols
sso:
  oauth2:
    - name: 'auth0'
      client-id: '<client-id>'
      client-secret: '<client-secret>'
      openid:
        issuer: 'https://<your-domain>'
clusters:
  - id: 'confluent-prod'
    name: 'Confluent Production'
    bootstrapServers: 'pkc-xxxxx.eu-central-1.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<api-key>" password="<api-secret>";
    schemaRegistry:
      url: 'https://psrc-xxxxx.eu-central-1.aws.confluent.cloud'
      security:
        username: '<sr-api-key>'
        password: '<sr-api-secret>'
license: "<your-license-key>" # Enterprise license key
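In a container deployment, this YAML file is typically mounted into the Console container and pointed to with the CDK_IN_CONF_FILE environment variable. The compose snippet below is a minimal sketch; the mount path, image tag and variable name should be checked against your deployment documentation.

services:
  conduktor-console:
    image: conduktor/conduktor-console
    ports:
      - '8080:8080'
    volumes:
      # assumed mount path for the configuration file shown above
      - ./console-config.yaml:/opt/conduktor/console-config.yaml:ro
    environment:
      CDK_IN_CONF_FILE: /opt/conduktor/console-config.yaml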
Plain auth example
Connect to a local cluster with no auth/encryption. For example, a local dev Kafka.
clusters:
  - id: 'local'
    name: 'Local Kafka Cluster'
    bootstrapServers: 'localhost:9092'
Plain auth with schema registry
Connect to a local cluster with schema registry.
clusters:
  - id: 'local'
    name: 'Local Kafka Cluster'
    bootstrapServers: 'localhost:9092'
    schemaRegistry:
      url: 'http://localhost:8081'
Kafka Connect
Cluster with Kafka Connect configured with basic authentication.
clusters:
  - id: 'kafka'
    name: 'Kafka'
    bootstrapServers: 'localhost:9092'
    kafkaConnects:
      - id: 'kafka-connect'
        name: 'My Kafka Connect'
        url: 'http://localhost:8083'
        security:
          username: '<username>'
          password: '<password>'
Amazon MSK with IAM authentication
Connect to an MSK cluster with IAM authentication. You can use explicit credentials or inherit them from the environment.
Using explicit credentials:
clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
Using environment credentials:
clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
You can also override either the default profile or the role.
Override the profile:
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="other-profile";
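Override the role: the same JAAS line accepts a role ARN through the aws-msk-iam-auth parameters. A sketch (the ARN and session name below are placeholders to replace with your own):
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required awsRoleArn="arn:aws:iam::123456789012:role/<role-name>" awsRoleSessionName="conduktor-console";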
With AWS Glue schema registry:
You can connect MSK clusters with AWS Glue Schema Registry using different authentication methods.
Using explicit credentials:
clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
    schemaRegistry:
      region: '<aws-region>'
      security:
        type: 'Credentials'
        accessKeyId: '<access-key-id>'
        secretKey: '<secret-key>'
      registryName: '<registry-name>' # Optional: specify registry name
Using default credentials chain:
clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
    schemaRegistry:
      region: '<aws-region>'
      security:
        type: 'FromContext'
        profile: '<profile>' # optional
Using IAM role:
clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
    schemaRegistry:
      region: '<aws-region>'
      security:
        type: 'FromRole'
        role: '<role>'
Confluent Cloud basic connection
Connect to a Confluent Cloud cluster using API keys.
clusters:
  - id: 'confluent-pkc'
    name: 'Confluent Prod'
    bootstrapServers: 'pkc-lzoyy.eu-central-1.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
Confluent Cloud with schema registry
clusters:
  - id: 'confluent-pkc'
    name: 'Confluent Prod'
    bootstrapServers: 'pkc-lq8v7.eu-central-1.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
    schemaRegistry:
      url: 'https://psrc-o268o.eu-central-1.aws.confluent.cloud'
      security:
        username: '<username>'
        password: '<password>'
Confluent Cloud with service account management
Connect to a Confluent Cloud cluster and configure additional properties to manage service accounts, API keys and ACLs.
clusters:
  - id: 'confluent-pkc'
    name: 'Confluent Prod'
    bootstrapServers: 'pkc-lq8v7.eu-central-1.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
    kafkaFlavor:
      type: "Confluent"
      key: "<api_key>" # Confluent Cloud API Key, NOT cluster API Key
      secret: "<api_secret>" # Confluent Cloud API Secret, NOT cluster API Secret
      confluentEnvironmentId: "<env_id>"
      confluentClusterId: "<cluster_id>"
Aiven with SSL certificate (truststore)
You can use PEM-formatted files (.pem or .cer) directly by providing the CA certificate inline. Make sure the certificate is on a single line.
clusters:
  - id: aiven
    name: My Aiven Cluster
    bootstrapServers: 'kafka-09ba.aivencloud.com:21661'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=SCRAM-SHA-512
      sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<username>" password="<password>";
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <YOUR CA CERTIFICATE> -----END CERTIFICATE-----
Aiven with two-way SSL (keystore and truststore)
You need three pieces of information:
- your access key (in the keystore.jks file)
- your access certificate (in the keystore.jks file)
- your CA certificate (in the truststore.jks file)
Ensure each value is on a single line.
clusters:
  - id: 'aiven-ssl'
    name: 'Aiven SSL'
    bootstrapServers: 'kafka-09ba.aivencloud.com:21650'
    properties: |
      security.protocol=SSL
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <YOUR CA CERTIFICATE> -----END CERTIFICATE-----
      ssl.keystore.type=PEM
      ssl.keystore.key=-----BEGIN PRIVATE KEY----- <YOUR ACCESS KEY> -----END PRIVATE KEY-----
      ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- <YOUR ACCESS CERTIFICATE> -----END CERTIFICATE-----
Aiven with service account management
Connect to an Aiven cluster and configure additional properties to manage service accounts and ACLs.
clusters:
  - id: 'aiven-09ba'
    name: 'Aiven Prod'
    bootstrapServers: 'kafka-09ba.aivencloud.com:21661'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=SCRAM-SHA-512
      sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<username>" password="<password>";
      ssl.truststore.type=PEM
      ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <YOUR CA CERTIFICATE> -----END CERTIFICATE-----
    kafkaFlavor:
      type: "Aiven"
      apiToken: "<api_token>"
      project: "<project>"
      serviceName: "kafka-18350d67" # Kafka cluster ID (Aiven service name)
SASL/OAUTHBEARER with OIDC
OAUTHBEARER with OIDC authentication has been possible since Kafka 3.1 and KIP-768. To demonstrate OIDC authentication, you can connect to NASA's GCN Kafka cluster after you sign up. Here's a configuration example (adapt the values to your needs):
clusters:
  - id: 'nasa'
    name: 'GCN NASA Kafka'
    bootstrapServers: 'kafka.gcn.nasa.gov:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=OAUTHBEARER
      sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
        clientId="<YOUR_CLIENT_ID>" \
        clientSecret="<YOUR_CLIENT_SECRET>";
      sasl.oauthbearer.token.endpoint.url=https://auth.gcn.nasa.gov/oauth2/token
      sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
Conduktor Gateway
Connect to your Conduktor Gateway using the Gateway ‘flavor’ to manage Gateway Interceptors through Console’s UI.
clusters:
  - id: 'gateway-cluster'
    name: 'My Gateway cluster'
    bootstrapServers: 'conduktor-gateway:9092'
    kafkaFlavor:
      type: "Gateway"
      url: "http://conduktor-gateway:8888"
      user: "admin"
      password: "conduktor"
      virtualCluster: "passthrough"
Console-wide log configuration
To configure Conduktor Console logs globally, you can use the following environment variables:

Environment variable | Default value | Description
---|---|---
CDK_ROOT_LOG_LEVEL | INFO | Global Console log level; one of OFF, ERROR, WARN, INFO, DEBUG
CDK_ROOT_LOG_FORMAT | TEXT | Log format; one of TEXT or JSON (since 1.26.0)
CDK_ROOT_LOG_COLOR | true | Enable color in logs when possible

For backward compatibility, CDK_DEBUG: true is still supported and is equivalent to CDK_ROOT_LOG_LEVEL: DEBUG.
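For example, to switch Console to JSON logs at DEBUG level, using only the variables documented above, you could set:

# environment variables for the Console container
CDK_ROOT_LOG_LEVEL=DEBUG
CDK_ROOT_LOG_FORMAT=JSON
CDK_ROOT_LOG_COLOR=false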
Per-module log configuration
To configure Conduktor Console logs on a per-module basis, use the environment variables detailed below. Possible values for all of them are OFF, ERROR, WARN, INFO, DEBUG and TRACE.

Environment variable | Default value | Description
---|---|---
PLATFORM_STARTUP_LOG_LEVEL | INFO | Log level of the setup/configuration process. Set to INFO by default, but switches to DEBUG if CDK_ROOT_LOG_LEVEL: DEBUG.
CONSOLE_ROOT_LOG_LEVEL | CDK_ROOT_LOG_LEVEL | Logs related to any actions done in the Console UI
PLATFORM_API_ROOT_LOG_LEVEL | CDK_ROOT_LOG_LEVEL | Internal platform API logs (health endpoints)
Log level inheritance
If you don't explicitly set the log level for a module, it will inherit the CDK_ROOT_LOG_LEVEL. For instance, if you only set:

CDK_ROOT_LOG_LEVEL: DEBUG
# CONSOLE_ROOT_LOG_LEVEL isn't set

then CONSOLE_ROOT_LOG_LEVEL will automatically be set to DEBUG. Similarly, if you set:

CDK_ROOT_LOG_LEVEL: INFO
CONSOLE_ROOT_LOG_LEVEL: DEBUG

then CONSOLE_ROOT_LOG_LEVEL will still be set to DEBUG and isn't overridden.
Structured logging (JSON)
To enable structured logging, set CDK_ROOT_LOG_FORMAT=JSON. The logs will be structured using the following format:
{
  "timestamp": "2024-06-14T10:09:25.802542476+00:00",
  "level": "<log level>",
  "message": "<log message>",
  "logger": "<logger name>",
  "thread": "<logger thread>",
  "stack_trace": "<throwable>",
  "mdc": {
    "key": "value"
  }
}
The log timestamp is encoded in ISO-8601 format. When structured logging is enabled, CDK_ROOT_LOG_COLOR is always ignored.
Runtime logger configuration API
From version 1.28.0, Conduktor Console exposes an API to change the log level of a logger at runtime. This API requires admin privileges and is available on /api/public/debug/v1/loggers.
Get all loggers and their log level
GET /api/public/debug/v1/loggers:
curl -X GET 'http://localhost:8080/api/public/debug/v1/loggers' \
-H "Authorization: Bearer $API_KEY" | jq .
That will output:
[
  {
    "name": "io",
    "level": "INFO"
  },
  {
    "name": "io.conduktor",
    "level": "INFO"
  },
  {
    "name": "io.conduktor.authenticator",
    "level": "INFO"
  },
  {
    "name": "io.conduktor.authenticator.ConduktorUserProfile",
    "level": "INFO"
  },
  {
    "name": "org",
    "level": "INFO"
  },
  {
    "name": "org.apache",
    "level": "INFO"
  },
  {
    "name": "org.apache.avro",
    "level": "INFO"
  },
  ...
]
Get a specific logger and its log level
GET /api/public/debug/v1/loggers/{loggerName}:
curl -X GET 'http://localhost:8080/api/public/debug/v1/loggers/io.conduktor.authenticator' \
-H "Authorization: Bearer $API_KEY" | jq .
That will output:
[
  {
    "name": "io.conduktor.authenticator",
    "level": "INFO"
  },
  {
    "name": "io.conduktor.authenticator.ConduktorUserProfile",
    "level": "INFO"
  },
  ...
]
The loggerName filter uses a 'contains' match, so you can pass either the fully qualified logger name or just a part of it. For example, the filter authenticator will match both the io.conduktor.authenticator and io.conduktor.authenticator.ConduktorUserProfile loggers.
Set a specific logger's log level
PUT /api/public/debug/v1/loggers/{loggerName}/{logLevel}:
curl -X PUT 'http://localhost:8080/api/public/debug/v1/loggers/io.conduktor.authenticator/DEBUG' \
-H "Authorization: Bearer $API_KEY" | jq .
That will output the list of loggers impacted by the update:
[
  "io.conduktor.authenticator",
  "io.conduktor.authenticator.ConduktorUserProfile",
  ...
]
Like the GET endpoint, the loggerName filter uses a 'contains' match, so you can pass either the fully qualified logger name or just a part of it. The logLevel is case-insensitive and can be one of TRACE, DEBUG, INFO, WARN, ERROR or OFF.
Set log levels for multiple loggers
PUT /api/public/debug/v1/loggers:
curl -X PUT 'http://localhost:8080/api/public/debug/v1/loggers' \
-H "Authorization: Bearer $API_KEY" \
--data '[
{
"name": "io.conduktor.authenticator.ConduktorUserProfile",
"level": "TRACE"
},
{
"name": "io.conduktor.authenticator.adapter",
"level": "DEBUG"
}
]' | jq .
That will output the list of loggers impacted by the update:
[
  "io.conduktor.authenticator.ConduktorUserProfile",
  "io.conduktor.authenticator.ConduktorUserProfile$LocalUserProfile",
  "io.conduktor.authenticator.adapter",
  "io.conduktor.authenticator.adapter.Http4sCacheSessionStore",
  ...
]