Configuration Snippets

The Conduktor Console can be configured using a YAML configuration file or through environment variables. The full list of configurable properties can be found here. Note that it's also possible to make some configurations (such as Kafka cluster configuration) through our API.
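
For example, when running the Docker image, core properties from the YAML file can be supplied as CDK_-prefixed environment variables instead. The snippet below is a minimal sketch; check the properties reference for the exact variable names and the installation guide for the image name and tag:

services:
  conduktor-console:
    image: conduktor/conduktor-console # verify the image name and tag against the installation guide
    environment:
      CDK_DATABASE_URL: 'postgresql://conduktor:change_me@postgresql:5432/conduktor'
      CDK_ADMIN_EMAIL: 'name@your_company.io'
      CDK_ADMIN_PASSWORD: 'admin'
      CDK_LICENSE: '' # license key if Enterprise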

Note that you can also configure your clusters from the Admin section of Console, where you can also upload certificates using the certificate store.

GitOps: Managing Cluster Configurations

tip

Our recommendation is to use the Console API if you wish to configure clusters with a GitOps approach.
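
As an illustration of that approach, a cluster could be declared as a resource manifest in Git and applied via CI. The sketch below is hypothetical; the resource kind, apiVersion, and field names must be checked against the Console API/CLI reference:

apiVersion: console/v2 # assumption; confirm in the API reference
kind: KafkaCluster # assumption; confirm in the API reference
metadata:
  name: confluent-prod
spec:
  displayName: 'Confluent Prod'
  bootstrapServers: 'pkc-lq8v7.eu-central-1.aws.confluent.cloud:9092'

Applying it from a pipeline is then a single CLI call (for example, conduktor apply -f cluster.yaml with the Conduktor CLI), so the file in Git remains the source of truth.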

Note that from Console version 1.19, if you configure clusters through the YAML file, it acts as the source of truth for cluster definitions. This means that any changes made to a cluster through the UI will be overridden on the next restart that references your configuration file.

If you've created your cluster configurations from within the Console UI, they will not be impacted by a restart. Removing the YAML block entirely will not remove existing clusters from the UI.

Configuration Examples

The sections below provide reusable snippets for common configurations.

Complete Configuration Example

This demonstrates a complete configuration for Conduktor Enterprise, consisting of one Kafka cluster with Schema Registry, SSO, and a license key.

For identity provider-specific guides, see configuring SSO. Note that if you don't have an Enterprise license, you should omit the SSO configuration and use local users instead (a local-users sketch is shown after the example below).

organization:
  name: "conduktor"

database:
  hosts:
    - host: 'postgresql'
      port: 5432
  name: 'conduktor'
  username: 'conduktor'
  password: 'change_me'
  connection_timeout: 30 # in seconds

monitoring:
  cortex-url: 'http://conduktor-monitoring:9009/'
  alert-manager-url: 'http://conduktor-monitoring:9010/'
  callback-url: 'http://conduktor-console:8080/monitoring/api/'
  notifications-callback-url: 'http://localhost:8080'

admin:
  email: 'name@your_company.io'
  password: "admin"

sso:
  oauth2:
    - name: 'auth0'
      client-id: '<client ID>'
      client-secret: '<client secret>'
      openid:
        issuer: 'https://<domain>'

clusters:
  - id: 'confluent-pkc'
    name: 'Confluent Prod'
    bootstrapServers: 'pkc-lq8v7.eu-central-1.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
    schemaRegistry:
      id: 'confluent-sr'
      url: 'https://psrc-o268o.eu-central-1.aws.confluent.cloud'
      security:
        username: 'user'
        password: 'password'

license: "" # license key if Enterprise

Plain Auth Example

Connect to a local cluster with no auth/encryption, for example a local development Kafka.

clusters:
  - id: 'local'
    name: 'Local Kafka Cluster'
    bootstrapServers: 'localhost:9092'

Plain Auth With Schema Registry

Connect to a local cluster with schema registry.

clusters:
  - id: 'local'
    name: 'Local Kafka Cluster'
    bootstrapServers: 'localhost:9092'
    schemaRegistry:
      id: 'local-sr'
      url: 'http://localhost:8081'

Kafka Connect

Cluster with Kafka Connect configured with Basic Auth.

- id: 'kafka'
  name: 'Kafka'
  bootstrapServers: 'localhost:9092'
  kafkaConnects:
    - id: 'kafka-connect'
      name: 'My Kafka Connect'
      url: 'http://localhost:8083'
      security:
        username: '<username>'
        password: '<password>'

Amazon MSK with IAM Authentication Example

Connect to an MSK cluster with IAM Authentication using AWS Access Key and Secret.

Billing note

Note that deploying this CloudFormation template into your environment will result in billable resources being consumed. See Amazon MSK pricing for more information.

clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>

Connect to an MSK cluster with IAM credentials inherited from environment.

clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler

In addition, you can override the default profile or the role to assume, as shown below.

sasl.jaas.config = software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="other-profile";
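
Similarly, to assume a specific IAM role instead of the default credentials (the account ID, role name, and session name below are placeholders):

sasl.jaas.config = software.amazon.msk.auth.iam.IAMLoginModule required awsRoleArn="arn:aws:iam::123456789012:role/<role-name>" awsRoleSessionName="conduktor-console";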

Amazon MSK with Glue Schema Registry

Connect to an MSK cluster with schema registry using credentials.

clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
    schemaRegistry:
      region: '<aws-region>'
      security:
        type: 'Credentials'
        accessKeyId: '<access-key-id>'
        secretKey: '<secret-key>'

Connect to an MSK cluster with schema registry using the default chain of credentials providers.

clusters:
  - id: 'amazon-msk-iam'
    name: 'Amazon MSK IAM'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
    schemaRegistry:
      region: '<aws-region>'
      security:
        type: 'FromContext'
        profile: '<profile>' # optional; omit to use the default profile

Connect to an MSK cluster with schema registry using a specific role.

clusters:
  - id: amazon-msk-iam
    name: Amazon MSK IAM
    color: '#FF9900'
    bootstrapServers: 'b-3-public.****.kafka.eu-west-1.amazonaws.com:9198'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=AWS_MSK_IAM
      sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
      sasl.client.callback.handler.class=io.conduktor.aws.IAMClientCallbackHandler
      aws_access_key_id=<access-key-id>
      aws_secret_access_key=<secret-access-key>
    schemaRegistry:
      region: '<aws-region>'
      security:
        type: 'FromRole'
        role: '<role>'

In addition, for all of the previous configuration examples, you can add a registryName to the schemaRegistry section to use a specific registry for this cluster.

schemaRegistry:
  region: '<aws-region>'
  security:
    type: 'Credentials'
    accessKeyId: '<access-key-id>'
    secretKey: '<secret-key>'
  registryName: '<registry-name>'

Confluent Cloud Example

Connect to a Confluent Cloud cluster using API keys.

clusters:
  - id: 'confluent-pkc'
    name: 'Confluent Prod'
    bootstrapServers: 'pkc-lzoyy.eu-central-1.aws.confluent.cloud:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";

Confluent Cloud with Schema Registry

Connect to a Confluent Cloud cluster with Schema Registry using basic auth.

- id: 'confluent-pkc'
  name: 'Confluent Prod'
  bootstrapServers: 'pkc-lq8v7.eu-central-1.aws.confluent.cloud:9092'
  properties: |
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
  schemaRegistry:
    id: 'confluent-sr'
    url: 'https://psrc-o268o.eu-central-1.aws.confluent.cloud'
    security:
      username: '<username>'
      password: '<password>'

Confluent Cloud with Service Account Management

Connect to a Confluent Cloud cluster and configure additional properties to manage Service Accounts, API Keys and ACLs.

- id: 'confluent-pkc'
  name: 'Confluent Prod'
  bootstrapServers: 'pkc-lq8v7.eu-central-1.aws.confluent.cloud:9092'
  properties: |
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
  kafkaFlavor:
    type: "Confluent"
    key: "<api_key>" # Confluent Cloud API Key, NOT cluster API Key
    secret: "<api_secret>" # Confluent Cloud API Secret, NOT cluster API Secret
    confluentEnvironmentId: "<env_id>"
    confluentClusterId: "<cluster_id>"

SSL Certificates Example - Aiven (truststore)

You can directly use PEM-formatted files (.pem or .cer) by providing the CA certificate inline. Please make sure the certificate is on a single line.

- id: aiven
  name: My Aiven Cluster
  bootstrapServers: 'kafka-09ba.aivencloud.com:21661'
  properties: |
    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-512
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<username>" password="<password>";
    ssl.truststore.type=PEM
    ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <YOUR CA CERTIFICATE> -----END CERTIFICATE-----

2 Way SSL (keystore + truststore)

You should have three items, generally embedded in two files:

  • Your access key (in the keystore.jks file)
  • Your access certificate (in the keystore.jks file)
  • Your CA certificate (in the truststore.jks file)

Please make sure each content is on a single line.
- id: 'aiven-ssl'
  name: 'Aiven SSL'
  bootstrapServers: 'kafka-09ba.aivencloud.com:21650'
  properties: |
    security.protocol=SSL
    ssl.truststore.type=PEM
    ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <YOUR CA CERTIFICATE> -----END CERTIFICATE-----
    ssl.keystore.type=PEM
    ssl.keystore.key=-----BEGIN PRIVATE KEY----- <YOUR ACCESS KEY> -----END PRIVATE KEY-----
    ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- <YOUR ACCESS CERTIFICATE> -----END CERTIFICATE-----

Aiven with Service Account Management

Connect to an Aiven cluster and configure additional properties to manage Service Accounts and ACLs.

- id: 'aiven-09ba'
  name: 'Aiven Prod'
  bootstrapServers: 'kafka-09ba.aivencloud.com:21661'
  properties: |
    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-512
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<username>" password="<password>";
    ssl.truststore.type=PEM
    ssl.truststore.certificates=-----BEGIN CERTIFICATE----- <YOUR CA CERTIFICATE> -----END CERTIFICATE-----
  kafkaFlavor:
    type: "Aiven"
    apiToken: "<api_token>"
    project: "<project>"
    serviceName: "kafka-18350d67" # kafka cluster id (service name)

Conduktor Gateway Virtual Cluster

Configure virtual clusters from your Conduktor Gateway deployment to manage interceptors within Console.

- id: 'kafka'
  name: 'Kafka'
  bootstrapServers: 'http://conduktor-proxy-internal:8888'
  kafkaFlavor:
    type: "Gateway"
    url: "http://conduktor-proxy-internal:8888"
    user: "admin"
    password: "conduktor"
    virtualCluster: "passthrough"

SASL/OAUTHBEARER and OIDC Kafka Cluster Example

OAUTHBEARER with OIDC authentication has been possible since Kafka 3.1 and KIP-768.

To demonstrate OIDC authentication, NASA provides a Kafka cluster that you can connect to after you sign up. Here's a config example that works for their cluster; adapt the values as needed for your own Kafka cluster.

clusters:
  - id: 'nasa'
    name: 'GCN NASA Kafka'
    bootstrapServers: 'kafka.gcn.nasa.gov:9092'
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=OAUTHBEARER
      sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
        clientId="<YOUR_CLIENT_ID>" \
        clientSecret="<YOUR_CLIENT_SECRET>";
      sasl.oauthbearer.token.endpoint.url=https://auth.gcn.nasa.gov/oauth2/token
      sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler