
Environment variables

To configure Conduktor Gateway, we recommend setting up environment variables. They can be set in the Gateway container or taken from a file. To make sure the values were set correctly, check the startup logs.

Use the Gateway container

You can set the environment variables in the `docker run` command with `-e` or `--env`:

```shell
docker run -d \
  -e KAFKA_BOOTSTRAP_SERVERS=kafka1:9092,kafka2:9092 \
  -e KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT \
  -e KAFKA_SASL_MECHANISM=PLAIN \
  -e KAFKA_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';" \
  -p 6969:6969 \
  conduktor/conduktor-gateway:latest
```

Or in a docker-compose.yaml:

```yaml
services:
  conduktor-gateway:
    image: conduktor/conduktor-gateway:latest
    ports:
      - "6969:6969"
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka1:9092,kafka2:9092
      KAFKA_SECURITY_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_MECHANISM: PLAIN
      KAFKA_SASL_JAAS_CONFIG: "org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';"
```

Use a file

You can mount a file with the key-value pairs into the container and provide its path by setting the environment variable GATEWAY_ENV_FILE.

Example:

```properties
KAFKA_BOOTSTRAP_SERVERS=kafka1:9092,kafka2:9092
KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT
KAFKA_SASL_MECHANISM=PLAIN
KAFKA_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';
```

You'll get a confirmation in the logs: `Sourcing environment variables from $GATEWAY_ENV_FILE`, or a warning if the file isn't found: `Warning: GATEWAY_ENV_FILE is set but the file does not exist or is not readable.`
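For instance, a minimal env file could look like this. The file name and the `/gateway.env` mount path are arbitrary examples; the `docker run` invocation is shown as a comment and assumes the same image as above:

```shell
# Create a sample env file (name and contents are illustrative).
cat > gateway.env <<'EOF'
KAFKA_BOOTSTRAP_SERVERS=kafka1:9092,kafka2:9092
KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT
KAFKA_SASL_MECHANISM=PLAIN
EOF

# Mount it into the container and point GATEWAY_ENV_FILE at the mounted path:
# docker run -d \
#   -v "$PWD/gateway.env:/gateway.env" \
#   -e GATEWAY_ENV_FILE=/gateway.env \
#   -p 6969:6969 \
#   conduktor/conduktor-gateway:latest
```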

Networking

Port and SNI routing

| Environment variable | Description | Default value |
|---|---|---|
| **Common properties** | | |
| GATEWAY_ADVERTISED_HOST | The hostname returned in the Gateway's metadata for clients to connect to. | Your hostname |
| GATEWAY_ROUTING_MECHANISM | Defines the routing method: port for port routing, host for SNI routing. | port |
| GATEWAY_PORT_START | The first port the Gateway listens on. | 6969 |
| GATEWAY_MIN_BROKERID | The broker ID associated with the first port (GATEWAY_PORT_START). Should match the lowest broker.id (or node.id) in the Kafka cluster. | 0 |
| GATEWAY_BIND_HOST | The network interface the Gateway binds to. | 0.0.0.0 |
| **Port routing specific** | | |
| GATEWAY_PORT_COUNT | The total number of ports used by Gateway. | (maxBrokerId - minBrokerId) + 3 |
| **SNI routing specific** | | |
| GATEWAY_ADVERTISED_SNI_PORT | The port returned in the Gateway's metadata for clients to connect to when using SNI routing. | GATEWAY_PORT_START |
| GATEWAY_ADVERTISED_HOST_PREFIX | Configures the advertised broker names. | broker |
| GATEWAY_SECURITY_PROTOCOL | The security protocol that clients should use to connect to Gateway. Must be set to SSL, SASL_SSL or DELEGATED_SASL_SSL for SNI routing. | Depends on KAFKA_SECURITY_PROTOCOL |
| GATEWAY_SNI_HOST_SEPARATOR | The separator used to construct the returned metadata. | - |
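As a sketch of the port-routing arithmetic implied by the table above, assuming broker N is advertised on GATEWAY_PORT_START + (N - GATEWAY_MIN_BROKERID); the broker IDs used here are made-up examples:

```shell
GATEWAY_PORT_START=6969
GATEWAY_MIN_BROKERID=0
MAX_BROKERID=2   # hypothetical highest broker.id in the cluster

# Ports reserved by Gateway: (maxBrokerId - minBrokerId) + 3
PORT_COUNT=$(( (MAX_BROKERID - GATEWAY_MIN_BROKERID) + 3 ))
echo "port count: $PORT_COUNT"

# One advertised port per broker, starting at GATEWAY_PORT_START:
for id in 0 1 2; do
  echo "broker $id -> port $(( GATEWAY_PORT_START + id - GATEWAY_MIN_BROKERID ))"
done
```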

Load balancing

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_CLUSTER_ID | A unique identifier for a given Gateway cluster, used to establish Gateway cluster membership for load balancing. | gateway |
| GATEWAY_FEATURE_FLAGS_INTERNAL_LOAD_BALANCING | Whether to use Conduktor Gateway's internal load balancer to balance connections between Gateway instances. | true |
| GATEWAY_RACK_ID | Similar to broker.rack. | |

HTTP API

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_HTTP_PORT | The port on which Gateway serves the HTTP management API. | 8888 |
| GATEWAY_SECURED_METRICS | Whether the HTTP management API requires authentication. | true |
| GATEWAY_ADMIN_API_USERS | Users that can access the API. Admin access is required for write operations. To grant read-only access, set admin: false. | [{username: admin, password: conduktor, admin: true}] |
| **HTTPS configuration** | | |
| GATEWAY_HTTPS_KEY_STORE_PATH | Enables HTTPS and specifies the keystore to use for TLS connections. | |
| GATEWAY_HTTPS_KEY_STORE_PASSWORD | Password for the keystore used in HTTPS TLS connections. | |
| GATEWAY_HTTPS_CLIENT_AUTH | Client authentication configuration for mTLS. Possible values: NONE, REQUEST, REQUIRED. | NONE |
| GATEWAY_HTTPS_TRUST_STORE_PATH | The truststore used for mTLS. | |
| GATEWAY_HTTPS_TRUST_STORE_PASSWORD | Password for the truststore defined above. | |

Upstream connection

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_UPSTREAM_CONNECTION_POOL_TYPE | Upstream connection pool type. Possible values: NONE (no connection pool) or ROUND_ROBIN (round-robin connection pool). | NONE |
| GATEWAY_UPSTREAM_NUM_CONNECTION | The number of connections between Conduktor Gateway and Kafka per upstream thread. Only used when ROUND_ROBIN is enabled. | 10 |

Licensing

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_LICENSE_KEY | License key. | None |

Connect from Gateway to Kafka

Conduktor Gateway's connection to Kafka is configured by the KAFKA_ environment variables.

When translating Kafka's properties, convert them to upper case and replace each . with _. For example, Gateway's Kafka property bootstrap.servers is declared as the environment variable KAFKA_BOOTSTRAP_SERVERS. Any variable prefixed with KAFKA_ will be treated as a connection parameter by Gateway. You can find snippets for each security protocol.
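A quick sketch of that naming rule; the helper function here is illustrative, not part of Gateway:

```shell
# Translate a Kafka client property name into the Gateway environment
# variable form: upper-case it, replace '.' with '_', and prefix with KAFKA_.
to_gateway_env() {
  printf 'KAFKA_%s\n' "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]' | tr '.' '_')"
}

to_gateway_env "bootstrap.servers"   # KAFKA_BOOTSTRAP_SERVERS
to_gateway_env "sasl.jaas.config"    # KAFKA_SASL_JAAS_CONFIG
```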

Connect from clients to Gateway

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_SECURITY_PROTOCOL | The type of authentication clients should use to connect to Gateway. Valid values: PLAINTEXT, SASL_PLAINTEXT, SASL_SSL, SSL, DELEGATED_SASL_PLAINTEXT and DELEGATED_SASL_SSL. | Depends on KAFKA_SECURITY_PROTOCOL |
| GATEWAY_FEATURE_FLAGS_MANDATORY_VCLUSTER | If no virtual cluster is detected, the user automatically falls back to the transparent virtual cluster called passthrough. If set to true, authentication is rejected when no vcluster is configured for the principal. | false |
| GATEWAY_ACL_ENABLED | Enable/disable ACL support on the Gateway transparent virtual cluster (passthrough) only. | false |
| GATEWAY_SUPER_USERS | Semicolon-separated (;) list of service accounts that will be super users on Gateway (excluding virtual clusters). Example: alice;bob. | Usernames from GATEWAY_ADMIN_API_USERS |
| GATEWAY_ACL_STORE_ENABLED | Obsolete, use the VirtualCluster resource instead. Enable/disable ACL support for virtual clusters only. | false |
| GATEWAY_AUTHENTICATION_CONNECTION_MAX_REAUTH_MS | Force client re-authentication after this amount of time (in ms). If set to 0, clients are never forced to re-authenticate before the next connection. | 0 |

SSL authentication

See client authentication for details.

| Environment variable | Description | Default value |
|---|---|---|
| **Keystore** | | |
| GATEWAY_SSL_KEY_STORE_PATH | Path to a mounted keystore for SSL connections. | |
| GATEWAY_SSL_KEY_STORE_PASSWORD | Password for the keystore defined above. | |
| GATEWAY_SSL_KEY_PASSWORD | Password for the key contained in the store above. | |
| GATEWAY_SSL_KEY_TYPE | jks or pkcs12. | jks |
| GATEWAY_SSL_UPDATE_CONTEXT_INTERVAL_MINUTES | Interval in minutes between SSL context refreshes. | 5 |
| **Truststore (for mTLS)** | | |
| GATEWAY_SSL_TRUST_STORE_PATH | Path to a mounted truststore for SSL connections. | |
| GATEWAY_SSL_TRUST_STORE_PASSWORD | Password for the truststore defined above. | |
| GATEWAY_SSL_TRUST_STORE_TYPE | jks or pkcs12. | jks |
| GATEWAY_SSL_CLIENT_AUTH | NONE will not request client authentication, OPTIONAL will request client authentication, REQUIRE will require client authentication. | NONE |
| GATEWAY_SSL_PRINCIPAL_MAPPING_RULES | mTLS leverages SSL mutual authentication to identify the Kafka client. The principal for an mTLS connection can be derived from the subject certificate using the same SSL principal mapping feature as Apache Kafka. | Extracts the subject |

OAuthbearer

Some of these definitions (e.g. SASL_OAUTHBEARER_JWKS_ENDPOINT_REFRESH) are taken from Kafka documentation.

| Environment variable | Description |
|---|---|
| GATEWAY_OAUTH_JWKS_URL | The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. |
| GATEWAY_OAUTH_EXPECTED_ISSUER | The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth iss claim and, if this value is set, the broker will match it exactly against what is in the JWT's iss claim. If there's no match, the broker will reject the JWT and authentication will fail. |
| GATEWAY_OAUTH_EXPECTED_AUDIENCES | The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth aud claim and, if this value is set, the broker will match the value from the JWT's aud claim to see if there is an exact match. If there's no match, the broker will reject the JWT and authentication will fail. |
| GATEWAY_OAUTH_JWKS_REFRESH | The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. |
| GATEWAY_OAUTH_JWKS_RETRY | The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting, doubling the wait length between attempts up to the maximum specified by sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms. |
| GATEWAY_OAUTH_JWKS_MAX_RETRY | The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting, doubling the wait length between attempts up to the maximum specified by sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms. |
| GATEWAY_OAUTH_SCOPE_CLAIM_NAME | The OAuth claim for the scope is often named scope, but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims, if the OAuth/OIDC provider uses a different name for that claim. |
| GATEWAY_OAUTH_SUB_CLAIM_NAME | The OAuth claim for the subject is often named sub, but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims, if the OAuth/OIDC provider uses a different name for that claim. |
| GATEWAY_OAUTH_USE_CC_POOL_ID | Set to true to use the Confluent Cloud pool ID as the principal name. This is useful for Confluent Cloud users in Delegated mode who want to use the pool ID as the principal name instead of the sub claim. |

Plain authentication

See client authentication for details.

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_USER_POOL_SECRET_KEY | Base64-encoded 256-bit value (e.g. openssl rand -base64 32). When using SASL_PLAIN or SASL_SSL, you can create local service accounts on Gateway. Their credentials are generated by Gateway based on GATEWAY_USER_POOL_SECRET_KEY. We strongly recommend changing this value for production deployments. | A default value is used to sign tokens and has to be changed. |
| GATEWAY_USER_POOL_SERVICE_ACCOUNT_REQUIRED | If true, verify the existence of a user mapping for the service account when a user connects in non-delegated SASL/PLAIN mode. | false |
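For example, a suitable production value can be generated with the openssl command from the table above; the decode step is just a sanity check that the secret is 256 bits long:

```shell
# Generate a base64-encoded 256-bit secret for GATEWAY_USER_POOL_SECRET_KEY.
SECRET=$(openssl rand -base64 32)
echo "$SECRET"

# Sanity check: decoding the base64 value should yield exactly 32 bytes (256 bits).
printf '%s' "$SECRET" | base64 -d | wc -c
```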

Security provider

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_SECURITY_PROVIDER | Specify your security provider: DEFAULT (from your JRE), BOUNCY_CASTLE, BOUNCY_CASTLE_FIPS or CONSCRYPT. Note that CONSCRYPT doesn't support macOS on aarch64. | DEFAULT |

Cluster switching / failover

Setting up your Kafka clusters for failover is similar to the standard setup, but you need to provide two sets of properties: one for your main cluster and one for your failover cluster.

You can define these properties as environment variables or load a cluster configuration file.

| Environment variable | Description |
|---|---|
| GATEWAY_BACKEND_KAFKA_SELECTOR | Indicates the use of a configuration file and provides its path, e.g. 'file: { path: /cluster-config.yaml }'. |
| KAFKA_FAILOVER_GATEWAY_ROLES | To turn Gateway into failover mode, set this to failover. |
| **Main cluster** | |
| KAFKA_MAIN_BOOTSTRAP_SERVERS | Bootstrap server. |
| KAFKA_MAIN_SECURITY_PROTOCOL | Security protocol. |
| KAFKA_MAIN_SASL_MECHANISM | SASL mechanism. |
| KAFKA_MAIN_SASL_JAAS_CONFIG | SASL JAAS config. |
| **Failover cluster** | |
| KAFKA_FAILOVER_BOOTSTRAP_SERVERS | Bootstrap server. |
| KAFKA_FAILOVER_SECURITY_PROTOCOL | Security protocol. |
| KAFKA_FAILOVER_SASL_MECHANISM | SASL mechanism. |
| KAFKA_FAILOVER_SASL_JAAS_CONFIG | SASL JAAS config. |

Internal topics

As Gateway is stateless, it uses Kafka topics to store its internal state. Use the following environment variables to configure these internal topics.

If the topics are missing, Gateway will create them automatically (provided it has permission to do so). You can also create the topics independently of Gateway; just make sure they're configured as described below.

Internal state

Firstly, there are some general configuration settings for Gateway internal state management which apply to all used topics.

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_GROUP_ID | Consumer group name used by Gateway to consume the internal license topic. Gateways from the same cluster use this consumer group to recognize each other. | conduktor_${GATEWAY_CLUSTER_ID} |
| GATEWAY_STORE_TTL_MS | Time between full refreshes (in ms). | 604800000 |
| GATEWAY_TOPIC_STORE_KCACHE_REPLICATION_FACTOR | Replication factor for the topic store; -1 defaults to the value defined in your cluster settings. | -1 |

Topic names

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_LICENSE_TOPIC | Topic where the license is stored. | _conduktor_${GATEWAY_CLUSTER_ID}_license |
| GATEWAY_TOPIC_MAPPINGS_TOPIC | Topic where the topic aliases are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_topicmappings |
| GATEWAY_USER_MAPPINGS_TOPIC | Topic where the service accounts are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_usermappings |
| GATEWAY_CONSUMER_OFFSETS_TOPIC | Topic where the offsets for concentrated topic consumption are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_consumer_offsets |
| GATEWAY_INTERCEPTOR_CONFIGS_TOPIC | Topic where the deployed interceptors are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_interceptor_configs |
| GATEWAY_ENCRYPTION_CONFIGS_TOPIC | Topic where the encryption configuration is stored, in specific cases. | _conduktor_${GATEWAY_CLUSTER_ID}_encryption_configs |
| GATEWAY_ACLS_TOPIC | Topic where the ACLs managed by Gateway are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_acls |
| GATEWAY_AUDIT_LOG_TOPIC | Topic where the Gateway audit log is stored. | _conduktor_${GATEWAY_CLUSTER_ID}_auditlogs |
| GATEWAY_VCLUSTERS_TOPIC | Topic where the virtual clusters are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_vclusters |
| GATEWAY_GROUPS_TOPIC | Topic where the service account groups are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_groups |
| GATEWAY_ENCRYPTION_KEYS_TOPIC | Topic where EDEKs are stored when the Gateway KMS is enabled in encryption interceptors. | _conduktor_${GATEWAY_CLUSTER_ID}_encryption_keys |
| GATEWAY_DATA_QUALITY_TOPIC | Topic where data quality violations are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_data_quality_violation |

Required topic configuration

The most important setting is log.cleanup.policy, which defines the clean-up policy for the topic. Most of the topics used by Gateway are compacted, but some use time-based retention. If this isn't set up properly, Gateway will throw an error on startup. Set the following:

  • log.cleanup.policy=compact for compaction
  • log.cleanup.policy=delete for time-based retention

If Gateway creates the topics for you, it will set the right values.

The second vital setting is the replication factor. This should be set to at least 3 in production environments to ensure that the data is safe (Gateway will warn you on startup if it's set to less than three). When creating topics, Gateway uses your Kafka brokers' default value for this setting.

For partition count, most of the topics are low volume and can operate well with a single partition. This isn't enforced (Gateway will work with multi-partition topics for internal state), but there is no need for more than one partition.

The exception is the audit log topic, which can receive a lot of events if enabled for a busy cluster. We recommend starting with 3 partitions for audit logs. This doesn't affect Gateway performance (Gateway only writes to this topic), but it will affect any other consumers you run against it.

| Topic | Cleanup policy | Recommended partitions | Other configuration |
|---|---|---|---|
| _conduktor_${GATEWAY_CLUSTER_ID}_license | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_topicmappings | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_usermappings | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_consumer_offsets | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_interceptor_configs | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_encryption_configs | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_acls | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_vclusters | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_groups | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_encryption_keys | compact | 1 | |
| _conduktor_${GATEWAY_CLUSTER_ID}_auditlogs | delete | 3 | We recommend a retention time of around 7 days for this topic due to its high volume. |
| _conduktor_${GATEWAY_CLUSTER_ID}_data_quality_violation | delete | 1 | |
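If you create the topics yourself, something like the following sketch could be used. The kafka-topics.sh invocations are illustrative (shown as comments) and assume a reachable broker at kafka1:9092; note that the topic-level config key is cleanup.policy (log.cleanup.policy is the broker-level name):

```shell
# Expand the internal topic names for a given cluster id.
GATEWAY_CLUSTER_ID=gateway
LICENSE_TOPIC="_conduktor_${GATEWAY_CLUSTER_ID}_license"
AUDIT_TOPIC="_conduktor_${GATEWAY_CLUSTER_ID}_auditlogs"
echo "$LICENSE_TOPIC"
echo "$AUDIT_TOPIC"

# Hypothetical manual creation, matching the table above:
# kafka-topics.sh --bootstrap-server kafka1:9092 --create \
#   --topic "$LICENSE_TOPIC" --partitions 1 --replication-factor 3 \
#   --config cleanup.policy=compact
# kafka-topics.sh --bootstrap-server kafka1:9092 --create \
#   --topic "$AUDIT_TOPIC" --partitions 3 --replication-factor 3 \
#   --config cleanup.policy=delete --config retention.ms=604800000   # ~7 days
```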

Internal setup

Threading

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_DOWNSTREAM_THREAD | The number of threads dedicated to handling IO between clients and Conduktor Gateway. | Number of cores |
| GATEWAY_UPSTREAM_THREAD | The number of threads dedicated to handling IO between Kafka and Conduktor Gateway. | Number of cores |

Feature flags

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_FEATURE_FLAGS_AUDIT | Whether or not to enable the audit feature. | true |
| GATEWAY_FEATURE_FLAGS_INTERNAL_LOAD_BALANCING | Whether or not to enable the Gateway internal load balancing. | true |

Monitoring

Audit

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_AUDIT_LOG_CONFIG_SPEC_VERSION | Version of the log. | 0.1.0 |
| GATEWAY_AUDIT_LOG_SERVICE_BACKING_TOPIC | Target topic name. | _auditLogs |
| GATEWAY_AUDIT_LOG_REPLICATION_FACTOR_OF_TOPIC | Replication factor used when creating the audit topic; -1 defaults to the one defined in your cluster settings. | -1 |
| GATEWAY_AUDIT_LOG_NUM_PARTITIONS_OF_TOPIC | Number of partitions used when creating the audit topic; -1 defaults to the one defined in your cluster settings. | -1 |
| GATEWAY_AUDIT_LOG_KAFKA_ | Overrides Kafka producer configuration for audit logs, e.g. GATEWAY_AUDIT_LOG_KAFKA_LINGER_MS=0. | |

Logging

| Environment variable | Description | Default value | Package |
|---|---|---|---|
| LOG4J2_APPENDER_LAYOUT | The format for console logging. Use json for JSON layout or pattern for pattern layout. | pattern | |
| LOG4J2_IO_CONDUKTOR_PROXY_NETWORK_LEVEL | Low-level networking, connection mapping, authentication, authorization. | info | io.conduktor.proxy.network |
| LOG4J2_IO_CONDUKTOR_UPSTREAM_THREAD_LEVEL | Request processing and forwarding. At trace, logs the requests sent. | info | io.conduktor.proxy.thread.UpstreamThread |
| LOG4J2_IO_CONDUKTOR_PROXY_REBUILDER_COMPONENTS_LEVEL | Request and response rewriting. Logs response payloads at debug (useful for checking METADATA). | info | io.conduktor.proxy.rebuilder.components |
| LOG4J2_IO_CONDUKTOR_PROXY_SERVICE_LEVEL | Various. Logs ACL checks and interceptor targeting at debug. Logs post-interceptor request/response payloads at trace. | info | io.conduktor.proxy.service |
| LOG4J2_IO_CONDUKTOR_LEVEL | Get even more logs not covered by the specific packages above. | info | io.conduktor |
| LOG4J2_ORG_APACHE_KAFKA_LEVEL | Kafka log level. | warn | org.apache.kafka |
| LOG4J2_IO_KCACHE_LEVEL | KCache log level (our persistence library). | warn | io.kcache |
| LOG4J2_IO_VERTX_LEVEL | Vert.x log level (our HTTP API framework). | warn | io.vertx |
| LOG4J2_IO_NETTY_LEVEL | Netty log level (our network framework). | error | io.netty |
| LOG4J2_IO_MICROMETER_LEVEL | Micrometer log level (our metrics framework). | error | io.micrometer |
| LOG4J2_ROOT_LEVEL | Root logging level (applies to anything else not listed above). | info | (root) |

Product analytics

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_FEATURE_FLAGS_ANALYTICS | Conduktor collects basic user analytics (such as a Gateway Started event) to understand product usage and improve the product. This is not based on any of the underlying Kafka data, which is never sent to Conduktor. | true |

Data Quality topic configs

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_DATA_QUALITY_TOPIC | Target topic name. | _conduktor_${GATEWAY_CLUSTER_ID}_data_quality_violation |
| GATEWAY_DATA_QUALITY_TOPIC_REPLICATION_FACTOR | Replication factor used when creating the data quality topic; defaults to the one defined in your cluster settings. | cluster default |
| GATEWAY_DATA_QUALITY_TOPIC_PARTITIONS | Number of partitions used when creating the data quality topic; defaults to the one defined in your cluster settings. | cluster default |
| GATEWAY_DATA_QUALITY_TOPIC_RETENTION_HOUR | Retention period (in hours) used when creating the data quality topic. | 168 (7 days) |