Configure Console

Conduktor Console can be configured using either a configuration file (platform-config.yaml) or environment variables. This is used to set up your organization’s environment. Configuration can be used to declare:
  • Organization name
  • External database (required)
  • User authentication (Basic or SSO)
  • Console license
We recommend using the Console UI (Settings > Clusters page) to configure Kafka clusters, schema registries and Kafka Connect. This has several advantages over the YAML configuration:
  • Intuitive interface with live update capabilities
  • Centralized and secured with RBAC and audit log events
  • Certificate store to help with custom certificate configuration (no more JKS files and volume mounts)
Check out the recommended deployment on GitHub.

Security considerations

  • The configuration file should be protected by file system permissions.
  • The database should have at-rest data encryption enabled on the data volume and have limited network connectivity.

Configuration file

platform-config.yaml
organization:
  name: demo

admin:
  email: admin@company.io
  password: admin

database:
  url: postgresql://conduktor:change_me@host:5432/conduktor
  # OR in a decomposed way
  # host: "host"
  # port: 5432
  # name: "conduktor"
  # username: "conduktor"
  # password: "change_me"
  # connection_timeout: 30 # in seconds

auth:
  local-users:
    - email: user@conduktor.io
      password: user

license: '<your license key>'

Bind file

The docker-compose file below shows how to bind your platform-config.yaml file. You can alternatively use environment variables. The CDK_IN_CONF_FILE variable indicates that a configuration file is being used and where to find it.
docker-compose.yaml
services:  
  postgresql:
    image: postgres:14
    hostname: postgresql
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: "conduktor"
      POSTGRES_USER: "conduktor"
      POSTGRES_PASSWORD: "change_me"
      POSTGRES_HOST_AUTH_METHOD: "scram-sha-256"

  conduktor-console:
    image: conduktor/conduktor-console
    depends_on:
      - postgresql
    ports:
      - "8080:8080"
    volumes:
      - conduktor_data:/var/conduktor
      - type: bind
        source: "./platform-config.yaml"
        target: /opt/conduktor/platform-config.yaml
        read_only: true
    environment:
      CDK_IN_CONF_FILE: /opt/conduktor/platform-config.yaml
    healthcheck:
      test: curl -f http://localhost:8080/platform/api/modules/health/live || exit 1
      interval: 10s
      start_period: 10s
      timeout: 5s
      retries: 3

volumes:
  pg_data: {}
  conduktor_data: {}

Environment override

Input configuration fields can also be provided using environment variables. Here’s an example of docker-compose that uses environment variables for configuration:
docker-compose.yaml
services:  
  postgresql:
    image: postgres:14
    hostname: postgresql
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: "conduktor"
      POSTGRES_USER: "conduktor"
      POSTGRES_PASSWORD: "change_me"
      POSTGRES_HOST_AUTH_METHOD: "scram-sha-256"

  conduktor-console:
    image: conduktor/conduktor-console
    depends_on:
      - postgresql
    ports:
      - "8080:8080"
    volumes:
      - conduktor_data:/var/conduktor
    healthcheck:
      test: curl -f http://localhost:8080/platform/api/modules/health/live || exit 1
      interval: 10s
      start_period: 10s
      timeout: 5s
      retries: 3
    environment:
      CDK_DATABASE_URL: "postgresql://conduktor:change_me@postgresql:5432/conduktor"
      CDK_LICENSE: "<your license key>"
      CDK_ORGANIZATION_NAME: "demo"
      CDK_ADMIN_EMAIL: "admin@company.io"
      CDK_ADMIN_PASSWORD: "admin"

volumes:
  pg_data: {}
  conduktor_data: {}

Container user and permissions

Console runs as the non-root user conduktor-platform (UID 10001, GID 0). All files inside the container volume /var/conduktor are owned by the conduktor-platform user.

Configure memory usage

We rely on container CGroups limits and use up to 80% of the container memory limit for JVM max heap size.
-XX:+UseContainerSupport -XX:MaxRAMPercentage=80
You only need to set the memory limit on your container.
# Values.yaml
...
platform:
  resources:
    limits:
      memory: 8Gi
...
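To make the arithmetic concrete, the sketch below (a hypothetical helper, not part of Console) computes the effective JVM heap cap implied by -XX:MaxRAMPercentage=80 for a given container memory limit:

```python
GIB = 1024 ** 3


def max_heap_bytes(limit_bytes: int, ram_percentage: float = 80.0) -> float:
    """Approximate the JVM max heap derived from -XX:MaxRAMPercentage."""
    return limit_bytes * ram_percentage / 100


# With an 8Gi container limit, the JVM heap is capped at roughly 6.4Gi.
print(max_heap_bytes(8 * GIB) / GIB)  # → 6.4
```

With the 8Gi limit from the Values.yaml above, roughly 6.4Gi is available to the heap; the remainder covers off-heap memory, metaspace and the other processes in the container.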

Configure SSL/TLS

Depending on the environment, Conduktor might need to access external services (such as Kafka clusters, SSO servers, databases or object storage) that require a custom certificate for SSL/TLS communication. You can configure this using:
  • Console UI (recommended) - you can manage your certificates in a dedicated screen and configure SSL authentication from the broker setup wizard.
  • volume mount - this method is only required if you have LDAPS. Do not use it for Kafka or Kafka components.
Where each SSL use case is configured:
  • SSL to secure data in transit: via the UI for Kafka clusters, Schema Registry / Kafka Connect, and LDAPS/OIDC
  • SSL to authenticate the client: via the UI for Kafka clusters and Schema Registry / Kafka Connect; not supported for LDAPS/OIDC

Use the Conduktor certificate store

This option is recommended for Kafka, Kafka Connect and Schema Registry connections.
You can import and parse the certificates as text or files. The supported formats are:
  • .crt
  • .pem
  • .jks
  • .p12

Upload certificates

You can add cluster configurations from the Settings > Clusters page. When you add the bootstrap server to your configuration, a check validates whether the certificate is issued by a trusted authority. If it is not, you have two options:
  • Skip SSL Check: This will skip validation of the SSL certificate on your server. This is an easy option for development environments with self-signed certificates
  • Upload Certificate: This option will enable you to upload the certificate (.crt, .pem, .jks or .p12 files), or paste the certificate as text
Upon uploading the certificate, you should see a green icon indicating the connection is secure.

Add truststores

You can also manage organization truststores using the Settings > Certificates page. Simply add all of your certificates by uploading them or pasting them as text. The SSL context will then be derived when you configure Kafka, Kafka Connect and Schema Registry connections.

Mount custom truststore

This option is recommended for SSO, DB or other external services requiring SSL/TLS communication.
Conduktor supports SSL/TLS connections using Java truststore.

Create TrustStore (JKS) from certificate in PEM format

If you already have a truststore, you can skip this step. You need the keytool program, usually packaged with JDK distributions, and a certificate in PEM format (.pem or .crt).
# Output truststore JKS file: ./truststore.jks
# Certificate alias inside the truststore: usually the certificate subject
# Input certificate file: ./my-certificate-file.pem
# Truststore password: changeit
keytool \
    -importcert \
    -noprompt \
    -trustcacerts \
    -keystore ./truststore.jks \
    -alias "my-domain.com" \
    -file ./my-certificate-file.pem \
    -storepass changeit \
    -storetype JKS

Configure custom truststore via Conduktor Console

Mount the truststore file into the conduktor-console container and set the environment variables that give the truststore location inside the container (and its password, if needed). For example, if the truststore file is truststore.jks with password changeit, mount it at /opt/conduktor/certs/truststore.jks inside the container. With Docker:
docker run --rm \
  --mount "type=bind,source=$PWD/truststore.jks,target=/opt/conduktor/certs/truststore.jks" \
  -e CDK_SSL_TRUSTSTORE_PATH="/opt/conduktor/certs/truststore.jks" \
  -e CDK_SSL_TRUSTSTORE_PASSWORD="changeit" \
  conduktor/conduktor-console
With docker-compose:
services:
  conduktor-console:
    image: conduktor/conduktor-console
    ports:
      - 8080:8080
    volumes:
      - type: bind
        source: ./truststore.jks
        target: /opt/conduktor/certs/truststore.jks
        read_only: true
    environment:
      CDK_SSL_TRUSTSTORE_PATH: '/opt/conduktor/certs/truststore.jks'
      CDK_SSL_TRUSTSTORE_PASSWORD: 'changeit'

Client certificate authentication

This option is recommended for mTLS.
This mechanism uses TLS protocol to authenticate the client. Also known as:
  • Mutual SSL, Mutual TLS, mTLS
  • Two-Way SSL, SSL Certificate Authentication
  • Digital Certificate Authentication, Public Key Infrastructure (PKI) Authentication

Use the UI (keystore method)

Use the keystore file from your Kafka admin or provider (in .jks or .p12 format). Click the “Import from keystore” button to select a keystore file from your filesystem. Fill in the required keystore password and key password, then click “Import”. You will return to the cluster screen with the content of your keystore extracted into Access key and Access certificate.

Use the UI (Access key & Access certificate method)

Your Kafka admin or your Kafka provider gave you two files for authentication:
  • An Access key (.key file)
  • An Access certificate (.pem or .crt file)
Aiven, for example, provides these two files. You can paste the contents of the two files into Conduktor or import them from a keystore.

Use volume mount

You can mount the keystore file in the conduktor-console image:
services:
  conduktor-console:
    image: conduktor/conduktor-console
    ports:
      - 8080:8080
    volumes:
      - type: bind
        source: ./keystore.jks
        target: /opt/conduktor/certs/keystore.jks
        read_only: true
Then, from the UI, choose the SSL authentication method “Keystore file is mounted on the volume” and fill in the required fields.

Configure Postgres database

Conduktor Console requires a Postgres database to store its state.

Postgres requirements

  • PostgreSQL version 13 or higher
  • The provided connection role should be granted ALL PRIVILEGES on the configured database: Console must be able to create/update/delete schemas and tables on the database.
  • For your PostgreSQL deployment, use at least 1-2 vCPUs, 1 GB of RAM and 10 GB of disk.
If you want to use AWS RDS or AWS Aurora as the database for Console, note that Console does not work with every PostgreSQL engine within RDS: only engine versions 14.8+ / 15.3+ are supported (other versions are not fully supported).

Database configuration properties

  • database: a key/value configuration consisting of:
    • database.url: database connection URL in the format [jdbc:]postgresql://[user[:password]@][[netloc][:port],...][/dbname][?param1=value1&...]
    • database.hosts[].host: PostgreSQL server host name
    • database.hosts[].port: PostgreSQL server port
    • database.host: PostgreSQL server host name (deprecated, use database.hosts instead)
    • database.port: PostgreSQL server port (deprecated, use database.hosts instead)
    • database.name: database name
    • database.username: database login role
    • database.password: database login password
    • database.connection_timeout: connection timeout, in seconds

URL format

Console supports both the standard PostgreSQL URL format and the JDBC PostgreSQL format. The connection username and password can be provided in the URL as basic authentication or as parameters.
database:
  url: 'jdbc:postgresql://user:password@host:5432/database' # or 'postgresql://host:5432/database?user=user&password=password'
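To illustrate that the two URL forms carry the same information, here is a small Python sketch (the parse_db_url helper is hypothetical, not Console code) that extracts the same fields from either form:

```python
from urllib.parse import urlsplit, parse_qs


def parse_db_url(url: str) -> dict:
    """Illustrative parser: accepts both 'jdbc:postgresql://...' and
    'postgresql://...' forms, reading credentials from either the
    authority section or the query string."""
    if url.startswith("jdbc:"):
        url = url[len("jdbc:"):]  # drop the optional jdbc: prefix
    parts = urlsplit(url)
    query = parse_qs(parts.query)
    return {
        "host": parts.hostname,
        "port": parts.port or 5432,   # PostgreSQL default port
        "dbname": parts.path.lstrip("/"),
        "user": parts.username or query.get("user", [None])[0],
        "password": parts.password or query.get("password", [None])[0],
    }


print(parse_db_url("jdbc:postgresql://user:password@host:5432/database"))
print(parse_db_url("postgresql://host:5432/database?user=user&password=password"))
```

Both calls yield the same host, port, database name and credentials, which is why either form is accepted.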

SSL support

By default, Console will try to connect to the database using SSL mode prefer. We plan to make this configurable in the future, along with the database certificate.

Setup

There are several options available when configuring an external database:
  1. From a single connection URL
    • With the CDK_DATABASE_URL environment variable.
    • With the database.url configuration field.
  In either case, the connection URL uses the standard PostgreSQL format [jdbc:]postgresql://[user[:password]@][[netloc][:port],...][/dbname][?param1=value1&...]
  2. From decomposed configuration fields
    • With the CDK_DATABASE_* environment variables.
    • With the database.* fields in the configuration file.
database:
  host: 'host'
  port: 5432
  name: 'database'
  username: 'user'
  password: 'password'
  connection_timeout: 30 # in seconds

Example

 docker run --rm \
  -p "8080:8080" \
  -e CDK_DATABASE_URL="postgresql://user:password@host:5432/database" \
  -e LICENSE_KEY="<your-license>" \
  conduktor/conduktor-console:latest
  • If both a connection URL and decomposed configuration fields are provided, the decomposed configuration fields take priority.
  • If the connection URL is invalid, or a mandatory configuration field (host, username or name) is missing, Conduktor will fail gracefully with a meaningful error message.
  • Before Console v1.2.0, EMBEDDED_POSTGRES=false was mandatory to enable external PostgreSQL configuration.

Multi-host configuration

If you have a multi-host setup, you can configure the database connection with a list of hosts. Conduktor connects through the PostgreSQL JDBC driver, which supports multiple hosts in the connection URL. To configure a multi-host setup, use the database.url configuration field with a list of hosts separated by commas:
database:
  url: 'jdbc:postgresql://user:password@host1:5432,host2:5432/database'
or with decomposed configuration fields:
database:
  hosts: 
   - host: 'host1'
     port: 5432
   - host: 'host2' 
     port: 5432
  name: 'database'
  username: 'user'
  password: 'password'
  connection_timeout: 30 # in seconds
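As a sketch of how the decomposed hosts list maps onto the comma-separated multi-host URL (build_multihost_url is a hypothetical helper, not Console code):

```python
def build_multihost_url(hosts: list, name: str, username: str, password: str) -> str:
    """Assemble a multi-host JDBC PostgreSQL URL from decomposed fields."""
    # join each host:port pair with commas, as the JDBC driver expects
    endpoint = ",".join(f"{h['host']}:{h['port']}" for h in hosts)
    return f"jdbc:postgresql://{username}:{password}@{endpoint}/{name}"


print(build_multihost_url(
    [{"host": "host1", "port": 5432}, {"host": "host2", "port": 5432}],
    "database", "user", "password"))
# → jdbc:postgresql://user:password@host1:5432,host2:5432/database
```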
You can also provide the JDBC connection parameter targetServerType to specify the target server type for the connection:
database:
  url: 'jdbc:postgresql://user:password@host1:5432,host2:5432/database?targetServerType=primary'
Nearly all targetServerType values are supported: any, primary, master, slave, secondary, preferSlave, preferSecondary and preferPrimary.

Debug Console

The Conduktor Console Docker image runs on Ubuntu Linux. It runs multiple services in a single Docker container, supervised by supervisord. To troubleshoot Console:
  1. Verify that Console is up and running.
  2. Manually debug Conduktor Console.
  3. Check the logs and send them to our support team if necessary.

1. Verify that Conduktor is up and running

First, verify that all the components are running.
Get containers status
docker ps
Output
NAME                   IMAGE                                       COMMAND                  SERVICE                CREATED          STATUS                    PORTS
conduktor-console      conduktor/conduktor-console:1.21.0          "/__cacert_entrypoin…"   conduktor-console      10 minutes ago   Up 9 minutes (healthy)    0.0.0.0:8080->8080/tcp
conduktor-monitoring   conduktor/conduktor-console-cortex:1.21.0   "/opt/conduktor/scri…"   conduktor-monitoring   10 minutes ago   Up 10 minutes (healthy)   0.0.0.0:9009-9010->9009-9010/tcp, 0.0.0.0:9090->9090/tcp
postgres               postgres:15.1                               "docker-entrypoint.s…"   postgres               10 minutes ago   Up 10 minutes             0.0.0.0:5432->5432/tcp
If you’re using an external Kafka installation and an external database, you only need to verify that the conduktor-console container shows healthy as the STATUS. If Console shows an “exited” status, check the Docker logs by running the following command (with the appropriate container name):
Get container logs
docker logs conduktor-console
You can save these logs in a file:
Store logs in a file
docker logs conduktor-console >& docker-logs-output.txt

2. Manually debug Conduktor Console

Check services within the conduktor-console container

First, invoke a shell within the conduktor-console container:
docker exec -it conduktor-console bash
From within the container, you can verify that all expected services are started. Conduktor Console uses supervisord inside the container to ensure the various services are started:
Check services status
supervisorctl status
Output
console                          FATAL     Exited too quickly (process log may have details)
platform_api                     RUNNING   pid 39, uptime 0:49:39
proxy                            RUNNING   pid 33, uptime 0:49:39
In the example mentioned above, the console did not start successfully. This indicates that we need to look at the log files to investigate the issue further.

3. Get the logs and send them to support

Logs are kept in /var/conduktor/log. You can see them using:
List log files
ls /var/conduktor/log/
Output
console-stdout---supervisor-umscgn8w.log       proxy                                   proxy-stdout---supervisor-2gim6er7.log  supervisord.log
platform_api-stdout---supervisor-cqvwnsqi.log  proxy-stderr---supervisor-8i0bjkaz.log  startup.log
The simplest approach is to copy all the logs to your local machine (into the current directory) by running:
docker compose cp conduktor-console:/var/conduktor/log .
Then send these logs to our support team. If you’ve contacted us before, log into your account and create a ticket.

Healthcheck endpoints

Liveness endpoint

/api/health/live returns HTTP 200 when Console is up.
cURL example
curl -s  http://localhost:8080/api/health/live
This endpoint can be used to set up probes on Kubernetes or docker-compose.

docker-compose probe setup

healthcheck:
  test:
    [
      'CMD-SHELL',
      'curl --fail http://localhost:${CDK_LISTENING_PORT:-8080}/api/health/live',
    ]
  interval: 10s
  start_period: 120s # Leave time for the psql init scripts to run
  timeout: 5s
  retries: 3

Kubernetes liveness probe

Port configuration
ports:
  - containerPort: 8080
    protocol: TCP
    name: httpprobe
Probe configuration
livenessProbe:
  httpGet:
    path: /api/health/live
    port: httpprobe
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5

Readiness/startup endpoint

/api/health/ready returns the readiness of Console. Module statuses:
  • NOTREADY (initial state)
  • READY
This endpoint returns a 200 status code if Console is in the READY state, or a 503 status code if Console failed to start.
cURL example
curl -s  http://localhost:8080/api/health/ready
# READY

Kubernetes startup probe

Port configuration

ports:
  - containerPort: 8080
    protocol: TCP
    name: httpprobe
Probe configuration
startupProbe:
  httpGet:
    path: /api/health/ready
    port: httpprobe
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 30

Console versions

/api/versions exposes the versions of the modules used to build Console, along with the overall Console version.
cURL example
curl -s  http://localhost:8080/api/versions | jq .
# {
#  "platform": "1.27.0",
#  "platformCommit": "ed849cbd545bb4711985ce0d0c93ca8588a6b31f",
#  "console": "f97704187a7122f78ddc9110c09abdd1a9f9d470",
#  "console_web": "05dea2124c01dfd9479bc0eb22d9f7d8aed6911b"
# }

Configuration properties and environment variables

Docker image environment variables

Logs
  • CDK_DEBUG: Enable Console debug logs (equivalent to CDK_ROOT_LOG_LEVEL=DEBUG). Default: false. Since: 1.0.0
  • CDK_ROOT_LOG_LEVEL: Set the Console global log level (one of DEBUG, INFO, WARN, ERROR). Default: INFO. Since: 1.11.0
  • CDK_ROOT_LOG_FORMAT: Set the logs format (one of TEXT, JSON). Default: TEXT. Since: 1.26.0
  • CDK_ROOT_LOG_COLOR: Enable ANSI colors in logs. Default: true. Since: 1.11.0
  • CDK_LOG_TIMEZONE: Timezone for dates in logs (in Olson timezone ID format, e.g. Europe/Paris). Default: the TZ environment variable, or UTC if TZ is not defined. Since: 1.28.0
Proxy settings
  • CDK_HTTP_PROXY_HOST: Proxy hostname. Since: 1.10.0
  • CDK_HTTP_PROXY_PORT: Proxy port. Default: 80. Since: 1.10.0
  • CDK_HTTP_NON_PROXY_HOSTS: List of hosts that should be reached directly, bypassing the proxy. Hosts must be separated by |, end with a * for wildcards, and not contain any /. Since: 1.10.0
  • CDK_HTTP_PROXY_USERNAME: Proxy username. Since: 1.10.0
  • CDK_HTTP_PROXY_PASSWORD: Proxy password. Since: 1.10.0
SSL
  • CDK_SSL_TRUSTSTORE_PATH: Truststore file path used by Console for SSL/TLS verification of Kafka, SSO, S3 and other clients. Since: 1.5.0
  • CDK_SSL_TRUSTSTORE_PASSWORD: Truststore password (optional). Since: 1.5.0
  • CDK_SSL_TRUSTSTORE_TYPE: Truststore type (optional). Default: jks. Since: 1.5.0
  • CDK_SSL_DEBUG: Enable SSL/TLS debug logs. Default: false. Since: 1.9.0
Java
  • CDK_GLOBAL_JAVA_OPTS: Custom JAVA_OPTS parameters passed to Console. Since: 1.10.0
  • CONSOLE_MEMORY_OPTS: Configure Java memory options. Default: -XX:+UseContainerSupport -XX:MaxRAMPercentage=80. Since: 1.18.0
Console
  • CDK_LISTENING_PORT: Console listening port. Default: 8080. Since: 1.2.0
  • CDK_VOLUME_DIR: Volume directory where Console stores data. Default: /var/conduktor. Since: 1.0.2
  • CDK_IN_CONF_FILE: Console configuration file location. Default: /opt/conduktor/default-platform-config.yaml. Since: 1.0.2
  • CDK_PLUGINS_DIR: Volume directory for custom deserializer plugins. Default: /opt/conduktor/plugins. Since: 1.22.0
Nginx
  • PROXY_BUFFER_SIZE: Tune the internal Nginx proxy_buffer_size. Default: 8k. Since: 1.16.0

Console properties reference

You have multiple options to configure Console: via environment variables, or via a YAML configuration file. A mapping of the configuration fields in platform-config.yaml to environment variables is provided below. Environment variables can be set on the container or imported from a file. When importing from a file, mount the file into the container and provide its path via the CDK_ENV_FILE environment variable. Use a .env file with key/value pairs:
MY_ENV_VAR1=value
MY_ENV_VAR2=otherValue
On startup, the logs will confirm with Sourcing environment variables from $CDK_ENV_FILE, or warn if CDK_ENV_FILE is set but the file cannot be read:
Warning: CDK_ENV_FILE is set but the file does not exist or is not readable.
If you set both an environment variable and a YAML value for the same field, the environment variable takes precedence.
Lists start at index 0 and are provided using the _idx_ syntax.

YAML property cases

YAML configuration supports multiple case formats (camelCase/kebab-case/lowercase) for property fragments such as:
  • clusters[].schemaRegistry.ignoreUntrustedCertificate
  • clusters[].schema-registry.ignore-untrusted-certificate
  • clusters[].schemaregistry.ignoreuntrustedcertificate
All are valid and equivalent in YAML.

Environment variable conversion

At startup, Conduktor Console will merge environment variables and YAML based configuration files into one unified configuration. The conversion rules are:
  • Filter for environment variables that start with CDK_
  • Remove the CDK_ prefix
  • Convert the variable name to lowercase
  • Replace _ with . for nested properties
  • Replace _[0-9]+_ with [0-9]. for list properties. (Lists start at index 0)
For example, the environment variable CDK_DATABASE_URL will be converted to database.url, and CDK_SSO_OAUTH2_0_OPENID_ISSUER will be converted to sso.oauth2[0].openid.issuer. The YAML equivalent would be:
database:
  url: "..."
sso:
  oauth2:
    - openid:
        issuer: "..."
When converting environment variables to YAML configuration, environment variables in UPPER-KEBAB-CASE will be converted to kebab-case in the YAML configuration.
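The conversion rules above can be sketched in a few lines of Python (illustrative only; env_to_property is a hypothetical helper, not Console code):

```python
import re


def env_to_property(name: str) -> str:
    """Sketch of the documented conversion: strip the CDK_ prefix,
    lowercase, turn _N_ segments into [N]. list indices, and replace
    the remaining underscores with dots."""
    assert name.startswith("CDK_")
    name = name[len("CDK_"):].lower()
    # list indices: _0_ becomes [0].
    name = re.sub(r"_(\d+)_", r"[\1].", name)
    return name.replace("_", ".")


print(env_to_property("CDK_DATABASE_URL"))                # → database.url
print(env_to_property("CDK_SSO_OAUTH2_0_OPENID_ISSUER"))  # → sso.oauth2[0].openid.issuer
```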

Conversion edge cases

Because YAML supports multiple case formats, the conversion rules have some edge cases when environment variables and YAML configuration are mixed. Extra rules when mixing the two:
  • Don’t use camelCase in the YAML configuration. Use kebab-case or lowercase.
  • Stick to one compatible case format for a given property fragment, using the following compatibility matrix.
Compatibility matrix:
  • kebab-case YAML: compatible with UPPER-KEBAB-CASE environment variables; not compatible with UPPERCASE
  • lowercase YAML: compatible with UPPERCASE environment variables; not compatible with UPPER-KEBAB-CASE
  • camelCase YAML: not compatible with either format
For example, CDK_CLUSTERS_0_SCHEMAREGISTRY_IGNOREUNTRUSTEDCERTIFICATE environment variable:
# Is equivalent to and compatible with
clusters:
  - schemaregistry:
      ignoreuntrustedcertificate: true
# but not with
clusters:
  - schema-registry:
      ignore-untrusted-certificate: true
The kebab-case YAML form would instead pair with CDK_CLUSTERS_0_SCHEMA-REGISTRY_IGNORE-UNTRUSTED-CERTIFICATE. This is why camelCase is not recommended in YAML configuration when mixing with environment variables.

Support of shell expansion in the YAML configuration file

Console supports shell expansion for environment variables and the home tilde ~. This is useful if you have to use custom environment variables in your configuration. For example, you can use the following syntax:
YAML configuration file
database:
  url: "jdbc:postgresql://${DB_LOGIN}:${DB_PWD}@${DB_HOST}:${DB_PORT:-5432}/${DB_NAME}"
with the following environment variables:
  • DB_LOGIN: usr
  • DB_PWD: pwd
  • DB_HOST: some_host
  • DB_NAME: cdk
DB_PORT is left unset, so the ${DB_PORT:-5432} default applies.
This will be expanded to:
Expanded configuration
database:
  url: "jdbc:postgresql://usr:pwd@some_host:5432/cdk"
If you want to escape the shell expansion, you can use the following syntax: $$. For example, if you want admin.password to be secret$123, you should set admin.password: "secret$$123".
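A toy model of this expansion behavior, covering the ${VAR}, ${VAR:-default} and $$ forms described above (the expand function is hypothetical, not Console code):

```python
import re


def expand(template: str, env: dict) -> str:
    """Toy model of the documented expansion: ${VAR}, ${VAR:-default},
    and $$ as an escaped literal dollar sign."""
    def repl(m):
        var, _, default = m.group(1).partition(":-")
        return env.get(var, default)

    # protect escaped dollars first so they survive substitution
    template = template.replace("$$", "\x00")
    template = re.sub(r"\$\{([^}]+)\}", repl, template)
    return template.replace("\x00", "$")


env = {"DB_LOGIN": "usr", "DB_PWD": "pwd", "DB_HOST": "some_host", "DB_NAME": "cdk"}
url = expand(
    "jdbc:postgresql://${DB_LOGIN}:${DB_PWD}@${DB_HOST}:${DB_PORT:-5432}/${DB_NAME}",
    env)
print(url)  # → jdbc:postgresql://usr:pwd@some_host:5432/cdk
print(expand("secret$$123", {}))  # → secret$123
```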

File path environment variables

When an environment variable ending with _FILE is set to a file path, its corresponding unprefixed environment variable will be replaced with the content of that file. For example, if you set CDK_LICENSE_FILE=/run/secrets/license, the value of CDK_LICENSE will be overridden by the content of the file located at /run/secrets/license.
Note that CDK_IN_CONF_FILE is not supported by this mechanism.
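A sketch of the _FILE convention (illustrative only; resolve_file_env is a hypothetical helper and does not model the CDK_IN_CONF_FILE exception):

```python
import os
import tempfile


def resolve_file_env(environ: dict) -> dict:
    """For every VAR_FILE entry pointing at a readable file, set VAR
    to that file's contents (mirroring the documented behavior)."""
    resolved = dict(environ)
    for key, path in environ.items():
        if key.endswith("_FILE") and os.path.isfile(path):
            with open(path) as f:
                resolved[key[: -len("_FILE")]] = f.read().strip()
    return resolved


# Simulate a mounted secret file such as /run/secrets/license
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("my-license-key")

env = resolve_file_env({"CDK_LICENSE_FILE": f.name})
print(env["CDK_LICENSE"])  # → my-license-key
```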

Global properties

  • organization.name (CDK_ORGANIZATION_NAME): Your organization’s name. Optional string, default "default".
  • admin.email (CDK_ADMIN_EMAIL): Your organization’s root administrator account email. Mandatory string.
  • admin.password (CDK_ADMIN_PASSWORD): Your organization’s root administrator account password. Must be at least 8 characters long and include at least 1 uppercase letter, 1 lowercase letter, 1 number and 1 special symbol. Mandatory string.
  • license (CDK_LICENSE or LICENSE_KEY): Enterprise license key. If not provided, Console falls back to the free plan. Optional string.
  • platform.external.url (CDK_PLATFORM_EXTERNAL_URL): Force the Console external URL. Useful for the SSO callback URL when behind a reverse proxy. By default, Console tries to guess it automatically from the X-Forwarded-* headers sent by the upstream reverse proxy. Optional string.
  • platform.https.cert.path (CDK_PLATFORM_HTTPS_CERT_PATH): Path to the SSL certificate file. Optional string.
  • platform.https.key.path (CDK_PLATFORM_HTTPS_KEY_PATH): Path to the SSL private key file. Optional string.
  • enable_product_metrics (CDK_ENABLE_PRODUCT_METRICS): To improve Conduktor Console, we collect anonymous usage metrics. Set to false to disable all metrics collection. Optional boolean, default true.

Database properties

See database configuration for details.
  • database.url (CDK_DATABASE_URL): External PostgreSQL configuration URL, in the format [jdbc:]postgresql://[user[:password]@][[netloc][:port],...][/dbname][?param1=value1&...]. Optional string.
  • database.hosts[].host (CDK_DATABASE_HOSTS_0_HOST): External PostgreSQL server hostname. Optional string.
  • database.hosts[].port (CDK_DATABASE_HOSTS_0_PORT): External PostgreSQL server port. Optional int.
  • database.host (CDK_DATABASE_HOST): External PostgreSQL server hostname (deprecated, use database.hosts instead). Optional string.
  • database.port (CDK_DATABASE_PORT): External PostgreSQL server port (deprecated, use database.hosts instead). Optional int.
  • database.name (CDK_DATABASE_NAME): External PostgreSQL database name. Optional string.
  • database.username (CDK_DATABASE_USERNAME): External PostgreSQL login role. Optional string.
  • database.password (CDK_DATABASE_PASSWORD): External PostgreSQL login password. Optional string.
  • database.connection_timeout (CDK_DATABASE_CONNECTIONTIMEOUT): External PostgreSQL connection timeout, in seconds. Optional int.

Session lifetime properties

  • auth.sessionLifetime (CDK_AUTH_SESSIONLIFETIME): Maximum session lifetime, in seconds. Optional int, default 259200.
  • auth.idleTimeout (CDK_AUTH_IDLETIMEOUT): Maximum idle session time, in seconds (access token lifetime). Should be lower than auth.sessionLifetime. Optional int, default 259200.

Local users properties

Optional local account list used to log into Console.
  • auth.local-users[].email (CDK_AUTH_LOCALUSERS_0_EMAIL): User login. Mandatory string, default "admin@conduktor.io".
  • auth.local-users[].password (CDK_AUTH_LOCALUSERS_0_PASSWORD): User password. Mandatory string, default "admin".

Monitoring properties

To see monitoring graphs and use alerts, you have to ensure that Cortex is also deployed.

Monitoring Configuration for Console

First, we need to configure Console to connect to Cortex services. By default, Cortex ports are:
  • Query port: 9009
  • Alert manager port: 9010
  • monitoring.cortex-url (CDK_MONITORING_CORTEXURL): Cortex search query URL, with port 9009. Mandatory string.
  • monitoring.alert-manager-url (CDK_MONITORING_ALERTMANAGERURL): Cortex alert manager URL, with port 9010. Mandatory string.
  • monitoring.callback-url (CDK_MONITORING_CALLBACKURL): Console API URL. Mandatory string.
  • monitoring.notifications-callback-url (CDK_MONITORING_NOTIFICATIONCALLBACKURL): Where Slack notifications should redirect. Mandatory string.
  • monitoring.clusters-refresh-interval (CDK_MONITORING_CLUSTERREFRESHINTERVAL): Refresh rate for metrics, in seconds. Optional int, default 60.
  • monitoring.use-aggregated-metrics (CDK_MONITORING_USEAGGREGATEDMETRICS): Defines whether to use the new aggregated metrics in the Console graphs. Optional boolean, default false.
  • monitoring.enable-non-aggregated-metrics (CDK_MONITORING_ENABLENONAGGREGATEDMETRICS): Toggles the collection of obsolete granular metrics. Optional boolean, default true.
monitoring.use-aggregated-metrics and monitoring.enable-non-aggregated-metrics are temporary flags to help you transition to the new metrics collection system. They will be removed in a future release. Swap their default values if you experience performance issues when Console is connected to large Kafka clusters:
CDK_MONITORING_USEAGGREGATEDMETRICS: true
CDK_MONITORING_ENABLENONAGGREGATEDMETRICS: false

Monitoring configuration for Cortex

See Cortex configuration for details.

SSO properties

  • sso.ignoreUntrustedCertificate (CDK_SSO_IGNOREUNTRUSTEDCERTIFICATE): Disable SSL checks. Optional boolean, default false.
  • sso.trustedCertificates (CDK_SSO_TRUSTEDCERTIFICATES): SSL public certificates for SSO authentication (LDAPS and OAuth2), as PEM. Optional string.

LDAP properties

| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| sso.ldap[].name | LDAP connection name | CDK_SSO_LDAP_0_NAME | true | string | |
| sso.ldap[].server | LDAP server host and port | CDK_SSO_LDAP_0_SERVER | true | string | |
| sso.ldap[].managerDn | Sets the manager DN | CDK_SSO_LDAP_0_MANAGERDN | true | string | |
| sso.ldap[].managerPassword | Sets the manager password | CDK_SSO_LDAP_0_MANAGERPASSWORD | true | string | |
| sso.ldap[].search-subtree | Sets whether the subtree should be searched | CDK_SSO_LDAP_0_SEARCHSUBTREE | false | boolean | true |
| sso.ldap[].search-base | Sets the base DN to search | CDK_SSO_LDAP_0_SEARCHBASE | true | string | |
| sso.ldap[].search-filter | Sets the search filter. By default, the filter is (uid={0}) for users of class type InetOrgPerson. | CDK_SSO_LDAP_0_SEARCHFILTER | false | string | "(uid={0})" |
| sso.ldap[].search-attributes | Sets the list of attributes to return. By default, all attributes are returned. The platform searches for the uid, cn, mail, email, givenName, sn and displayName attributes to map into the user token. | CDK_SSO_LDAP_0_SEARCHATTRIBUTES | false | string array | [] |
| sso.ldap[].groups-enabled | Sets whether group search is enabled | CDK_SSO_LDAP_0_GROUPSENABLED | false | boolean | false |
| sso.ldap[].groups-subtree | Sets whether the subtree should be searched | CDK_SSO_LDAP_0_GROUPSSUBTREE | false | boolean | true |
| sso.ldap[].groups-base | Sets the base DN to search from | CDK_SSO_LDAP_0_GROUPSBASE | true | string | |
| sso.ldap[].groups-filter | Sets the group search filter. For group class type GroupOfUniqueNames use "uniqueMember={0}"; for GroupOfNames use "member={0}". | CDK_SSO_LDAP_0_GROUPSFILTER | false | string | "uniquemember={0}" |
| sso.ldap[].groups-filter-attribute | Sets the name of the user attribute to bind to the group search filter. Defaults to the user's DN. | CDK_SSO_LDAP_0_GROUPSFILTERATTRIBUTE | false | string | |
| sso.ldap[].groups-attribute | Sets the group attribute name. Defaults to cn. | CDK_SSO_LDAP_0_GROUPSATTRIBUTE | false | string | "cn" |
| sso.ldap[].properties | Additional properties passed to the identity provider context | CDK_SSO_LDAP_0_PROPERTIES | false | dictionary | |
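Pulling these properties together, a minimal LDAP connection in platform-config.yaml might look like the following sketch (the server address, DNs and credentials are placeholders for your own directory):

```yaml
sso:
  ldap:
    - name: "default"                             # connection name
      server: "ldaps://ldap.example.com:636"      # placeholder host:port
      managerDn: "cn=admin,dc=example,dc=com"     # placeholder manager DN
      managerPassword: "change_me"
      search-base: "ou=users,dc=example,dc=com"   # base DN for user search
      groups-enabled: true
      groups-base: "ou=groups,dc=example,dc=com"  # base DN for group search
```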

OAuth2 properties

| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| sso.oauth2[].name | OAuth2 connection name | CDK_SSO_OAUTH2_0_NAME | true | string | |
| sso.oauth2[].default | Use as default | CDK_SSO_OAUTH2_0_DEFAULT | true | boolean | |
| sso.oauth2[].client-id | OAuth2 client ID | CDK_SSO_OAUTH2_0_CLIENTID | true | string | |
| sso.oauth2[].client-secret | OAuth2 client secret | CDK_SSO_OAUTH2_0_CLIENTSECRET | true | string | |
| sso.oauth2[].openid.issuer | Issuer to check on the token | CDK_SSO_OAUTH2_0_OPENID_ISSUER | true | string | |
| sso.oauth2[].scopes | Scopes to be requested in the client credentials request | CDK_SSO_OAUTH2_0_SCOPES | true | string[] | |
| sso.oauth2[].groups-claim | Group attribute from your identity provider | CDK_SSO_OAUTH2_0_GROUPSCLAIM | false | string | |
| sso.oauth2[].username-claim | Email attribute from your identity provider | CDK_SSO_OAUTH2_0_USERNAMECLAIM | false | string | email |
| sso.oauth2[].allow-unsigned-id-tokens | Allow unsigned ID tokens | CDK_SSO_OAUTH2_0_ALLOWUNSIGNEDIDTOKENS | false | boolean | false |
| sso.oauth2[].preferred-jws-algorithm | Configure the preferred JWS algorithm | CDK_SSO_OAUTH2_0_PREFERREDJWSALGORITHM | false | string, one of: HS256, HS384, HS512, RS256, RS384, RS512, ES256, ES256K, ES384, ES512, PS256, PS384, PS512, EdDSA | |
| sso.oauth2-logout | Whether the central identity provider logout should be called | CDK_SSO_OAUTH2LOGOUT | false | boolean | true |
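As an illustration, an OAuth2 connection could be declared like this (the connection name, client ID, secret and issuer are placeholders; the issuer must match your identity provider's OpenID issuer):

```yaml
sso:
  oauth2:
    - name: "my-idp"                       # placeholder connection name
      default: true
      client-id: "<client-id>"
      client-secret: "<client-secret>"
      openid:
        issuer: "https://idp.example.com"  # placeholder OpenID issuer
```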

JWT auth properties

| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| sso.jwt-auth.issuer | Issuer of your identity provider | CDK_SSO_JWTAUTH_ISSUER | true | string | |
| sso.jwt-auth.username-claim | Email attribute from your identity provider | CDK_SSO_JWTAUTH_USERNAMECLAIM | false | string | email |
| sso.jwt-auth.groups-claim | Group attribute from your identity provider | CDK_SSO_JWTAUTH_GROUPSCLAIM | false | string | groups |
| sso.jwt-auth.api-key-claim | API key attribute from your identity provider | CDK_SSO_JWTAUTH_APIKEYCLAIM | false | string | apikey |
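For example, JWT auth against a hypothetical identity provider could be configured as follows (the issuer and claim names are placeholders to adapt to your IdP):

```yaml
sso:
  jwt-auth:
    issuer: "https://idp.example.com"  # placeholder issuer
    username-claim: "email"            # claim carrying the user email
    groups-claim: "groups"             # claim carrying the user groups
```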

Kafka cluster properties

The new recommended way to configure clusters is through the CLI and YAML manifests.
| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| clusters[].id | String used to uniquely identify your Kafka cluster | CDK_CLUSTERS_0_ID | true | string | |
| clusters[].name | Alias or user-friendly name for your Kafka cluster | CDK_CLUSTERS_0_NAME | true | string | |
| clusters[].color | Attach a color to associate with your cluster in the UI | CDK_CLUSTERS_0_COLOR | false | string in hexadecimal format (#FFFFFF) | random |
| clusters[].ignoreUntrustedCertificate | Skip SSL certificate validation | CDK_CLUSTERS_0_IGNOREUNTRUSTEDCERTIFICATE | false | boolean | false |
| clusters[].bootstrapServers | List of host:port for your Kafka brokers, separated by commas | CDK_CLUSTERS_0_BOOTSTRAPSERVERS | true | string | |
| clusters[].properties | Any cluster configuration properties | CDK_CLUSTERS_0_PROPERTIES | false | string where each line is a property | |
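As a sketch, a cluster declared in the configuration file could look like this (broker addresses and the security properties are placeholders for your own setup):

```yaml
clusters:
  - id: "my-kafka"                                 # unique cluster ID
    name: "My Kafka Cluster"
    bootstrapServers: "broker1:9092,broker2:9092"  # placeholder brokers
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="change_me";
```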

Kafka vendor specific properties

Note that you only need to set the Kafka cluster properties to use the core features of Console. However, you can get additional benefits by setting the flavor of your cluster. This corresponds to the Provider tab of your cluster configuration in Console.
| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| clusters[].kafkaFlavor.type | Kafka flavor type, one of Confluent, Aiven, Gateway | CDK_CLUSTERS_0_KAFKAFLAVOR_TYPE | false | string | |
| Flavor is Confluent: Manage Confluent Cloud service accounts, API keys, and ACLs | | | | | |
| clusters[].kafkaFlavor.key | Confluent Cloud API key | CDK_CLUSTERS_0_KAFKAFLAVOR_KEY | true | string | |
| clusters[].kafkaFlavor.secret | Confluent Cloud API secret | CDK_CLUSTERS_0_KAFKAFLAVOR_SECRET | true | string | |
| clusters[].kafkaFlavor.confluentEnvironmentId | Confluent environment ID | CDK_CLUSTERS_0_KAFKAFLAVOR_CONFLUENTENVIRONMENTID | true | string | |
| clusters[].kafkaFlavor.confluentClusterId | Confluent cluster ID | CDK_CLUSTERS_0_KAFKAFLAVOR_CONFLUENTCLUSTERID | true | string | |
| Flavor is Aiven: Manage Aiven service accounts and ACLs | | | | | |
| clusters[].kafkaFlavor.apiToken | Aiven API token | CDK_CLUSTERS_0_KAFKAFLAVOR_APITOKEN | true | string | |
| clusters[].kafkaFlavor.project | Aiven project | CDK_CLUSTERS_0_KAFKAFLAVOR_PROJECT | true | string | |
| clusters[].kafkaFlavor.serviceName | Aiven service name | CDK_CLUSTERS_0_KAFKAFLAVOR_SERVICENAME | true | string | |
| Flavor is Gateway: Manage Conduktor Gateway interceptors | | | | | |
| clusters[].kafkaFlavor.url | Gateway API endpoint URL | CDK_CLUSTERS_0_KAFKAFLAVOR_URL | true | string | |
| clusters[].kafkaFlavor.user | Gateway API username | CDK_CLUSTERS_0_KAFKAFLAVOR_USER | true | string | |
| clusters[].kafkaFlavor.password | Gateway API password | CDK_CLUSTERS_0_KAFKAFLAVOR_PASSWORD | true | string | |
| clusters[].kafkaFlavor.virtualCluster | Gateway virtual cluster | CDK_CLUSTERS_0_KAFKAFLAVOR_VIRTUALCLUSTER | true | string | |
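For instance, attaching the Gateway flavor to a cluster definition could look like this (the URL, credentials and virtual cluster name are placeholders):

```yaml
clusters:
  - id: "gateway"
    name: "Conduktor Gateway"
    bootstrapServers: "gateway:6969"  # placeholder Gateway brokers
    kafkaFlavor:
      type: "Gateway"
      url: "http://gateway:8888"      # placeholder Gateway API endpoint
      user: "admin"
      password: "change_me"
      virtualCluster: "passthrough"   # placeholder virtual cluster
```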

Schema registry properties

| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| clusters[].schemaRegistry.url | The schema registry URL | CDK_CLUSTERS_0_SCHEMAREGISTRY_URL | true | string | |
| clusters[].schemaRegistry.ignoreUntrustedCertificate | Skip SSL certificate validation | CDK_CLUSTERS_0_SCHEMAREGISTRY_IGNOREUNTRUSTEDCERTIFICATE | false | boolean | false |
| clusters[].schemaRegistry.properties | Any schema registry configuration parameters | CDK_CLUSTERS_0_SCHEMAREGISTRY_PROPERTIES | false | string where each line is a property | |
| Basic Authentication | | | | | |
| clusters[].schemaRegistry.security.username | Basic auth username | CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_USERNAME | false | string | |
| clusters[].schemaRegistry.security.password | Basic auth password | CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_PASSWORD | false | string | |
| Bearer Token Authentication | | | | | |
| clusters[].schemaRegistry.security.token | Bearer auth token | CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_TOKEN | false | string | |
| mTLS Authentication | | | | | |
| clusters[].schemaRegistry.security.key | Access key | CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_KEY | false | string | |
| clusters[].schemaRegistry.security.certificateChain | Access certificate | CDK_CLUSTERS_0_SCHEMAREGISTRY_SECURITY_CERTIFICATECHAIN | false | string | |
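For example, a schema registry with basic authentication could be attached to a cluster like so (the URL and credentials are placeholders):

```yaml
clusters:
  - id: "my-kafka"
    name: "My Kafka Cluster"
    bootstrapServers: "broker1:9092"            # placeholder brokers
    schemaRegistry:
      url: "https://registry.example.com:8081"  # placeholder URL
      security:
        username: "registry-user"               # basic auth variant
        password: "change_me"
```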

Amazon Glue schema registry properties

| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| clusters[].schemaRegistry.region | The Glue schema registry region | CDK_CLUSTERS_0_SCHEMAREGISTRY_REGION | true | string | |
| clusters[].schemaRegistry.registryName | The Glue schema registry name | CDK_CLUSTERS_0_SCHEMAREGISTRY_REGISTRYNAME | false | string | |
| clusters[].schemaRegistry.amazonSecurity.type | Authentication type, one of Credentials, FromContext, FromRole | CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_TYPE | true | string | |
| Credentials Security | | | | | |
| clusters[].schemaRegistry.amazonSecurity.accessKeyId | Credentials auth access key | CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_ACCESSKEYID | true | string | |
| clusters[].schemaRegistry.amazonSecurity.secretKey | Credentials auth secret key | CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_SECRETKEY | true | string | |
| FromContext Security | | | | | |
| clusters[].schemaRegistry.amazonSecurity.profile | Authentication profile | CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_PROFILE | false | string | |
| FromRole Security | | | | | |
| clusters[].schemaRegistry.amazonSecurity.role | Authentication role | CDK_CLUSTERS_0_SCHEMAREGISTRY_AMAZONSECURITY_ROLE | true | string | |
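A Glue registry using static credentials might be declared like this (the region, registry name and keys are placeholders):

```yaml
clusters:
  - id: "my-kafka"
    name: "My Kafka Cluster"
    bootstrapServers: "broker1:9092"  # placeholder brokers
    schemaRegistry:
      region: "eu-west-1"             # placeholder Glue region
      registryName: "default"         # placeholder registry name
      amazonSecurity:
        type: "Credentials"
        accessKeyId: "<access-key-id>"
        secretKey: "<secret-key>"
```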

Kafka Connect properties

| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| clusters[].kafkaConnects[].id | String used to uniquely identify your Kafka Connect | CDK_CLUSTERS_0_KAFKACONNECTS_0_ID | true | string | |
| clusters[].kafkaConnects[].name | Name of your Kafka Connect | CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME | true | string | |
| clusters[].kafkaConnects[].url | The Kafka Connect URL | CDK_CLUSTERS_0_KAFKACONNECTS_0_URL | true | string | |
| clusters[].kafkaConnects[].headers | Optional additional headers (e.g. X-API-Token=123,X-From=Test) | CDK_CLUSTERS_0_KAFKACONNECTS_0_HEADERS | false | string | |
| clusters[].kafkaConnects[].ignoreUntrustedCertificate | Skip SSL certificate validation | CDK_CLUSTERS_0_KAFKACONNECTS_0_IGNOREUNTRUSTEDCERTIFICATE | false | boolean | false |
| Basic Authentication | | | | | |
| clusters[].kafkaConnects[].security.username | Basic auth username | CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_USERNAME | false | string | |
| clusters[].kafkaConnects[].security.password | Basic auth password | CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_PASSWORD | false | string | |
| Bearer Token Authentication | | | | | |
| clusters[].kafkaConnects[].security.token | Bearer token | CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_TOKEN | false | string | |
| mTLS Authentication | | | | | |
| clusters[].kafkaConnects[].security.key | Access key | CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_KEY | false | string | |
| clusters[].kafkaConnects[].security.certificateChain | Access certificate | CDK_CLUSTERS_0_KAFKACONNECTS_0_SECURITY_CERTIFICATECHAIN | false | string | |
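The same properties can also be passed as environment variables, for example in a docker-compose file (the IDs, names and URLs are placeholders):

```yaml
services:
  conduktor-console:
    image: conduktor/conduktor-console
    environment:
      CDK_CLUSTERS_0_ID: "my-kafka"                              # placeholder cluster ID
      CDK_CLUSTERS_0_NAME: "My Kafka Cluster"
      CDK_CLUSTERS_0_BOOTSTRAPSERVERS: "broker1:9092"            # placeholder brokers
      CDK_CLUSTERS_0_KAFKACONNECTS_0_ID: "connect-1"             # placeholder Connect ID
      CDK_CLUSTERS_0_KAFKACONNECTS_0_NAME: "My Connect Cluster"
      CDK_CLUSTERS_0_KAFKACONNECTS_0_URL: "http://connect:8083"  # placeholder URL
```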

ksqlDB properties

We support ksqlDB integration as of Conduktor Console v1.21.0.
| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| clusters[].ksqlDBs[].id | String used to uniquely identify your ksqlDB cluster | CDK_CLUSTERS_0_KSQLDBS_0_ID | true | string | |
| clusters[].ksqlDBs[].name | Name of your ksqlDB cluster | CDK_CLUSTERS_0_KSQLDBS_0_NAME | true | string | |
| clusters[].ksqlDBs[].url | The ksqlDB API URL | CDK_CLUSTERS_0_KSQLDBS_0_URL | true | string | |
| clusters[].ksqlDBs[].ignoreUntrustedCertificate | Skip SSL certificate validation | CDK_CLUSTERS_0_KSQLDBS_0_IGNOREUNTRUSTEDCERTIFICATE | false | boolean | false |
| Basic Authentication | | | | | |
| clusters[].ksqlDBs[].security.username | Basic auth username | CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_USERNAME | false | string | |
| clusters[].ksqlDBs[].security.password | Basic auth password | CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_PASSWORD | false | string | |
| Bearer Token Authentication | | | | | |
| clusters[].ksqlDBs[].security.token | Bearer token | CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_TOKEN | false | string | |
| mTLS Authentication | | | | | |
| clusters[].ksqlDBs[].security.key | Access key | CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_KEY | false | string | |
| clusters[].ksqlDBs[].security.certificateChain | Access certificate | CDK_CLUSTERS_0_KSQLDBS_0_SECURITY_CERTIFICATECHAIN | false | string | |
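A ksqlDB cluster could be declared alongside its Kafka cluster like this (the IDs, name and URL are placeholders):

```yaml
clusters:
  - id: "my-kafka"
    name: "My Kafka Cluster"
    bootstrapServers: "broker1:9092"  # placeholder brokers
    ksqlDBs:
      - id: "ksqldb-1"                # placeholder ksqlDB ID
        name: "My ksqlDB Cluster"
        url: "http://ksqldb:8088"     # placeholder API URL
```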

AuditLog export properties

The audit log can be exported to a Kafka topic, once configured in Console.
| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| audit_log_publisher.cluster | The cluster ID where the audit logs will be exported | CDK_AUDITLOGPUBLISHER_CLUSTER | false | string | |
| audit_log_publisher.topicName | The topic name where the audit logs will be exported | CDK_AUDITLOGPUBLISHER_TOPICNAME | false | string | |
| audit_log_publisher.topicConfig.partition | The number of partitions for the audit log topic | CDK_AUDITLOGPUBLISHER_TOPICCONFIG_PARTITION | false | int | 1 |
| audit_log_publisher.topicConfig.replicationFactor | The replication factor for the audit log topic | CDK_AUDITLOGPUBLISHER_TOPICCONFIG_REPLICATIONFACTOR | false | int | 1 |
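For example, to export the audit log to a topic on one of the declared clusters (the cluster ID, topic name and topic settings are placeholders):

```yaml
audit_log_publisher:
  cluster: "my-kafka"               # ID of a cluster configured in Console
  topicName: "conduktor-audit-log"  # placeholder topic name
  topicConfig:
    partition: 3
    replicationFactor: 3
```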

Conduktor SQL properties

To use Conduktor SQL, you need to configure a second database that stores the topic data. You can configure the Conduktor SQL database using CDK_KAFKASQL_DATABASE_URL, or set each value individually with CDK_KAFKASQL_DATABASE_*. Configure SQL to get started.
| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| kafka_sql.database.url | External PostgreSQL configuration URL, in the format [jdbc:]postgresql://[user[:password]@][[netloc][:port],...][/dbname][?param1=value1&...] | CDK_KAFKASQL_DATABASE_URL | false | string | |
| kafka_sql.database.hosts[].host | External PostgreSQL server hostname | CDK_KAFKASQL_DATABASE_HOSTS_0_HOST | false | string | |
| kafka_sql.database.hosts[].port | External PostgreSQL server port | CDK_KAFKASQL_DATABASE_HOSTS_0_PORT | false | int | |
| kafka_sql.database.host | External PostgreSQL server hostname (deprecated, use kafka_sql.database.hosts instead) | CDK_KAFKASQL_DATABASE_HOST | false | string | |
| kafka_sql.database.port | External PostgreSQL server port (deprecated, use kafka_sql.database.hosts instead) | CDK_KAFKASQL_DATABASE_PORT | false | int | |
| kafka_sql.database.name | External PostgreSQL database name | CDK_KAFKASQL_DATABASE_NAME | false | string | |
| kafka_sql.database.username | External PostgreSQL login role | CDK_KAFKASQL_DATABASE_USERNAME | false | string | |
| kafka_sql.database.password | External PostgreSQL login password | CDK_KAFKASQL_DATABASE_PASSWORD | false | string | |
| kafka_sql.database.connection_timeout | External PostgreSQL connection timeout, in seconds | CDK_KAFKASQL_DATABASE_CONNECTIONTIMEOUT | false | int | |
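For example, pointing Conduktor SQL at a dedicated database with a single environment variable (the host, credentials and database name are placeholders):

```yaml
services:
  conduktor-console:
    image: conduktor/conduktor-console
    environment:
      # placeholder connection string for the dedicated Conduktor SQL database
      CDK_KAFKASQL_DATABASE_URL: "postgresql://conduktor:change_me@sql-host:5432/conduktor_sql"
```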
Advanced properties:
| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| kafka_sql.commit_offset_every_in_sec | Frequency at which Conduktor SQL commits offsets into Kafka and flushes rows to the database | CDK_KAFKASQL_COMMITOFFSETEVERYINSEC | false | int | 30 (seconds) |
| kafka_sql.clean_expired_record_every_in_hour | How often to check for expired records and delete them from the database | CDK_KAFKASQL_CLEANEXPIREDRECORDEVERYINHOUR | false | int | 1 (hour) |
| kafka_sql.refresh_topic_configuration_every_in_sec | Frequency at which Conduktor SQL looks for new topics to start or stop indexing | CDK_KAFKASQL_REFRESHTOPICCONFIGURATIONEVERYINSEC | false | int | 30 (seconds) |
| kafka_sql.consumer_group_id | Consumer group used to identify Conduktor SQL | CDK_KAFKASQL_CONSUMER-GROUP-ID | false | string | conduktor-sql |
| kafka_sql.refresh_user_permissions_every_in_sec | Frequency at which Conduktor SQL refreshes the role permissions in the database to match the RBAC setup in Console | CDK_KAFKASQL_REFRESHUSERPERMISSIONSEVERYINSEC | false | int | |

Partner Zones properties

Advanced configuration for Partner Zones.
| Property | Description | Environment variable | Mandatory | Type | Default |
|---|---|---|---|---|---|
| partner_zone.reconcile-with-gateway-every-seconds | The interval at which a Partner Zone's state (stored in Console) is synchronized with Gateway. A lower value results in faster alignment between the required state and the current state on Gateway. | CDK_PARTNERZONE_RECONCILEWITHGATEWAYEVERYSECONDS | false | int | 5 (seconds) |

Configure HTTP proxy

Specify the proxy settings that Conduktor should use when accessing the Internet. The HTTP proxy works for both HTTP and HTTPS connections. There are five properties you can set to specify the proxy used by the HTTP protocol handler:
  • CDK_HTTP_PROXY_HOST: the host name of the proxy server
  • CDK_HTTP_PROXY_PORT: the port number. Default value is 80.
  • CDK_HTTP_NON_PROXY_HOSTS: a list of hosts that should be reached directly, bypassing the proxy. This is a list of patterns separated by |. The patterns may start or end with a * as a wildcard; / is not supported. Any host matching one of these patterns will be reached through a direct connection instead of through the proxy.
  • CDK_HTTP_PROXY_USERNAME: the proxy username
  • CDK_HTTP_PROXY_PASSWORD: the proxy password

Example

services:
  conduktor-console:
    image: conduktor/conduktor-console
    ports:
      - 8080:8080
    environment:
      CDK_HTTP_PROXY_HOST: "proxy.mydomain.com"
      CDK_HTTP_PROXY_PORT: 8000
      CDK_HTTP_NON_PROXY_HOSTS: "*.mydomain.com"

Configure HTTPS

To configure Conduktor Console to respond to HTTPS requests, you have to define a certificate and a private key. The server certificate is a public entity sent to every client that connects to the server; it should be provided as a PEM file. The configuration properties are:
  • platform.https.cert.path or environment variable CDK_PLATFORM_HTTPS_CERT_PATH: the path to server certificate file
  • platform.https.key.path or environment variable CDK_PLATFORM_HTTPS_KEY_PATH: the path to server private key file
Both the certificate and private key files must be readable by the conduktor-platform user (UID 10001, GID 0), but they don't need to be readable system-wide.

Sample configuration using docker-compose

In this example, the server certificate and key (server.crt and server.key) are stored in the same directory as the docker-compose file.
services:
  conduktor-console:
    image: conduktor/conduktor-console
    ports:
      - 8080:8080
    volumes: 
      - type: bind
        source: ./server.crt
        target: /opt/conduktor/certs/server.crt
        read_only: true
      - type: bind
        source: ./server.key
        target: /opt/conduktor/certs/server.key
        read_only: true
    environment:
      CDK_PLATFORM_HTTPS_CERT_PATH: '/opt/conduktor/certs/server.crt'
      CDK_PLATFORM_HTTPS_KEY_PATH: '/opt/conduktor/certs/server.key'
If the monitoring image conduktor/conduktor-console-cortex is running as well, you have to provide the CA public certificate to the monitoring image to allow metrics scraping over HTTPS.
services:
  conduktor-console:
    image: conduktor/conduktor-console
    ports:
      - 8080:8080
    volumes:
      - type: bind
        source: ./server.crt
        target: /opt/conduktor/certs/server.crt
        read_only: true
      - type: bind
        source: ./server.key
        target: /opt/conduktor/certs/server.key
        read_only: true
    environment:
      # HTTPS configuration
      CDK_PLATFORM_HTTPS_CERT_PATH: '/opt/conduktor/certs/server.crt'
      CDK_PLATFORM_HTTPS_KEY_PATH: '/opt/conduktor/certs/server.key'
      # monitoring configuration
      CDK_MONITORING_CORTEX-URL: http://conduktor-monitoring:9009/
      CDK_MONITORING_ALERT-MANAGER-URL: http://conduktor-monitoring:9010/
      CDK_MONITORING_CALLBACK-URL: https://conduktor-console:8080/monitoring/api/
      CDK_MONITORING_NOTIFICATIONS-CALLBACK-URL: http://localhost:8080

  conduktor-monitoring:
    image: conduktor/conduktor-console-cortex
    volumes:
      - type: bind
        source: ./server.crt
        target: /opt/conduktor/certs/server.crt
        read_only: true
    environment:
      CDK_CONSOLE-URL: "https://conduktor-console:8080"
      CDK_SCRAPER_SKIPSSLCHECK: "false" # set to true to skip certificate verification
      CDK_SCRAPER_CAFILE: "/opt/conduktor/certs/server.crt"