
Environment Variables

Set environment variables

Environment variables are the recommended way to configure Conduktor Gateway. They can be set on the Gateway container or loaded from a file. You can verify that the values have been applied by checking the startup logs.

In the container

For Docker

You can set them on the docker run command with -e or --env:

docker run -d \
  -e KAFKA_BOOTSTRAP_SERVERS=kafka1:9092,kafka2:9092 \
  -e KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT \
  -e KAFKA_SASL_MECHANISM=PLAIN \
  -e KAFKA_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';" \
  -p 6969:6969 \
  conduktor/conduktor-gateway:latest

Or in a docker-compose.yaml:

services:
  conduktor-gateway:
    image: conduktor/conduktor-gateway:latest
    ports:
      - 6969:6969
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka1:9092,kafka2:9092
      KAFKA_SECURITY_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_MECHANISM: PLAIN
      KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';

For Kubernetes

You can set them in the values.yaml of our Helm chart:

gateway:
  env:
    KAFKA_BOOTSTRAP_SERVERS: kafka1:9092,kafka2:9092
    KAFKA_SECURITY_PROTOCOL: SASL_PLAINTEXT
    KAFKA_SASL_MECHANISM: PLAIN
    KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';

Using a file

You can mount a file that contains the key-value pairs into the container and provide its path by setting the environment variable GATEWAY_ENV_FILE.

Example
KAFKA_BOOTSTRAP_SERVERS=kafka1:9092,kafka2:9092
KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT
KAFKA_SASL_MECHANISM=PLAIN
KAFKA_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';

If the file is loaded, you'll get a confirmation in the logs: Sourcing environment variables from $GATEWAY_ENV_FILE. If the file can't be found, you'll get a warning instead: Warning: GATEWAY_ENV_FILE is set but the file does not exist or is not readable.
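
For example, with Docker Compose you could mount the file and point GATEWAY_ENV_FILE at it. This is a minimal sketch; the file name and mount path are placeholders:

services:
  conduktor-gateway:
    image: conduktor/conduktor-gateway:latest
    ports:
      - 6969:6969
    environment:
      GATEWAY_ENV_FILE: /config/gateway.env    # path of the mounted file inside the container (placeholder)
    volumes:
      - ./gateway.env:/config/gateway.env:ro   # local env file, mounted read-only (placeholder)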

Networking

Port & SNI routing

Common Properties

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_ADVERTISED_HOST | The hostname returned in the Gateway's metadata for clients to connect to. | Your hostname |
| GATEWAY_ROUTING_MECHANISM | Defines the routing method: port for port routing, host for SNI routing. | port |
| GATEWAY_PORT_START | The first port the Gateway listens on. | 6969 |
| GATEWAY_MIN_BROKERID | The broker ID associated with the first port (GATEWAY_PORT_START). Should match the lowest broker.id (or node.id) in the Kafka cluster. | 0 |
| GATEWAY_BIND_HOST | The network interface the Gateway binds to. | 0.0.0.0 |

Port routing specific

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_PORT_COUNT | The total number of ports used by the Gateway. | (maxBrokerId - minBrokerId) + 3 |

SNI routing specific

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_ADVERTISED_SNI_PORT | The port returned in the Gateway's metadata for clients to connect to when using SNI routing. | GATEWAY_PORT_START |
| GATEWAY_ADVERTISED_HOST_PREFIX | Configures the advertised broker names. | broker |
| GATEWAY_SECURITY_PROTOCOL | Defines the security protocol clients should use to connect to the Gateway. Must be set to SSL, SASL_SSL, or DELEGATED_SASL_SSL for SNI routing. | Depends on KAFKA_SECURITY_PROTOCOL |
| GATEWAY_SNI_HOST_SEPARATOR | The separator used to construct returned metadata. | - |
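
For illustration, a minimal SNI-routing configuration might look like the snippet below; the hostname, keystore path and password are placeholders, and the GATEWAY_SSL_* variables are described in the SSL section further down:

environment:
  GATEWAY_ROUTING_MECHANISM: host                          # switch from port routing to SNI routing
  GATEWAY_ADVERTISED_HOST: gateway.example.com             # placeholder hostname that clients resolve
  GATEWAY_ADVERTISED_SNI_PORT: 9092                        # port advertised in the metadata
  GATEWAY_SECURITY_PROTOCOL: SASL_SSL                      # SNI routing requires SSL, SASL_SSL or DELEGATED_SASL_SSL
  GATEWAY_SSL_KEY_STORE_PATH: /etc/gateway/keystore.jks    # placeholder keystore path
  GATEWAY_SSL_KEY_STORE_PASSWORD: changeit                 # placeholder password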

Load Balancing

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_CLUSTER_ID | A unique identifier for a given Gateway cluster. Used to establish Gateway cluster membership for load balancing. | gateway |
| GATEWAY_FEATURE_FLAGS_INTERNAL_LOAD_BALANCING | Whether to use Conduktor Gateway's internal load balancer to balance connections between Gateway instances. | true |
| GATEWAY_RACK_ID | Similar to broker.rack. | |
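
As a sketch, Gateway instances that should balance connections between each other could share the settings below; the cluster ID and rack value are placeholders:

environment:
  GATEWAY_CLUSTER_ID: my-gateway-cluster                  # same value on every instance of the cluster (placeholder)
  GATEWAY_FEATURE_FLAGS_INTERNAL_LOAD_BALANCING: true     # use the internal load balancer
  GATEWAY_RACK_ID: eu-west-1a                             # placeholder rack, similar to broker.rack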

HTTP API

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_HTTP_PORT | The port on which the Gateway exposes the HTTP management API. | 8888 |
| GATEWAY_SECURED_METRICS | Determines whether the HTTP management API requires authentication. | true |
| GATEWAY_ADMIN_API_USERS | Users that can access the API. Note: admin access is required for write operations; setting admin: false grants read-only access. | [{username: admin, password: conduktor, admin: true}] |

HTTPS Configuration

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_HTTPS_KEY_STORE_PATH | Enables HTTPS and specifies the keystore to use for TLS connections. | |
| GATEWAY_HTTPS_KEY_STORE_PASSWORD | Password for the keystore used for HTTPS TLS connections. | |
| GATEWAY_HTTPS_CLIENT_AUTH | Client authentication configuration for mTLS. Possible values: NONE, REQUEST, REQUIRED. | NONE |
| GATEWAY_HTTPS_TRUST_STORE_PATH | Truststore used for mTLS. | |
| GATEWAY_HTTPS_TRUST_STORE_PASSWORD | Password for the truststore defined above. | |
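
For example, enabling HTTPS on the management API with a custom admin user could look like this; the credentials and keystore path are placeholders:

environment:
  GATEWAY_HTTP_PORT: 8888
  GATEWAY_SECURED_METRICS: true                                                   # require authentication on the API
  GATEWAY_ADMIN_API_USERS: "[{username: admin, password: change-me, admin: true}]"  # placeholder credentials
  GATEWAY_HTTPS_KEY_STORE_PATH: /etc/gateway/api-keystore.jks                     # placeholder keystore path, enables HTTPS
  GATEWAY_HTTPS_KEY_STORE_PASSWORD: changeit                                      # placeholder password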

Upstream Connection

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_UPSTREAM_CONNECTION_POOL_TYPE | Upstream connection pool type. Possible values: NONE (no connection pool), ROUND_ROBIN (round-robin connection pool). | NONE |
| GATEWAY_UPSTREAM_NUM_CONNECTION | The number of connections between Conduktor Gateway and Kafka per upstream thread. Used only when ROUND_ROBIN is enabled. | 10 |

Licensing

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_LICENSE_KEY | License key | None |

Connection from Gateway to Kafka

Conduktor Gateway's connection to Kafka is configured by the KAFKA_ environment variables.

To translate a Kafka property into its environment variable, uppercase it and replace each . with _.

For example, to set Gateway's Kafka property bootstrap.servers, declare the environment variable KAFKA_BOOTSTRAP_SERVERS.

Any variable prefixed with KAFKA_ will be treated as a connection parameter by Gateway.

You can find snippets for each security protocol on this page.
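
As an illustration, a SASL_SSL connection to Kafka might be declared as follows; the broker addresses, credentials and truststore path are placeholders, and each variable maps to the lower-case, dot-separated Kafka property shown in the comment:

environment:
  KAFKA_BOOTSTRAP_SERVERS: kafka1:9093,kafka2:9093             # bootstrap.servers (placeholder addresses)
  KAFKA_SECURITY_PROTOCOL: SASL_SSL                            # security.protocol
  KAFKA_SASL_MECHANISM: PLAIN                                  # sasl.mechanism
  KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';   # sasl.jaas.config
  KAFKA_SSL_TRUSTSTORE_LOCATION: /etc/gateway/truststore.jks   # ssl.truststore.location (placeholder path)
  KAFKA_SSL_TRUSTSTORE_PASSWORD: changeit                      # ssl.truststore.password (placeholder)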

Connection from Clients to Gateway

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_SECURITY_PROTOCOL | The type of authentication clients should use to connect to the Gateway. Valid values: PLAINTEXT, SASL_PLAINTEXT, SASL_SSL, SSL, DELEGATED_SASL_PLAINTEXT and DELEGATED_SASL_SSL. | Depends on KAFKA_SECURITY_PROTOCOL |
| GATEWAY_FEATURE_FLAGS_MANDATORY_VCLUSTER | If false, a principal with no configured virtual cluster automatically falls back to the transparent virtual cluster, named passthrough. If true, authentication is rejected when no virtual cluster is configured for the principal. | false |
| GATEWAY_ACL_ENABLED | Enable/disable ACL support on the Gateway transparent virtual cluster (passthrough) only. | false |
| GATEWAY_SUPER_USERS | Comma-separated list of service accounts that are super users on the Gateway (excluding virtual clusters), e.g. alice,bob. If not set, it falls back to the usernames defined in GATEWAY_ADMIN_API_USERS. | Usernames from GATEWAY_ADMIN_API_USERS |
| GATEWAY_ACL_STORE_ENABLED | Obsolete, use the VirtualCluster resource instead. Enable/disable ACL support for virtual clusters only. | false |
| GATEWAY_AUTHENTICATION_CONNECTION_MAX_REAUTH_MS | Force client reauthentication after this amount of time (in milliseconds). If set to 0, clients are never forced to reauthenticate. | 0 |
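
As an illustration, a Gateway that delegates client authentication to the backing Kafka cluster while enforcing ACLs could be sketched like this; the super-user names are placeholders:

environment:
  GATEWAY_SECURITY_PROTOCOL: DELEGATED_SASL_PLAINTEXT   # client credentials are checked against the backing Kafka cluster
  GATEWAY_ACL_ENABLED: true                             # enforce ACLs on the passthrough virtual cluster
  GATEWAY_SUPER_USERS: alice,bob                        # placeholder super users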

SSL

See Client Authentication for details.

Keystore

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_SSL_KEY_STORE_PATH | Path to a mounted keystore for SSL connections. | |
| GATEWAY_SSL_KEY_STORE_PASSWORD | Password for the keystore defined above. | |
| GATEWAY_SSL_KEY_PASSWORD | Password for the key contained in the keystore above. | |
| GATEWAY_SSL_KEY_TYPE | jks or pkcs12. | jks |
| GATEWAY_SSL_UPDATE_CONTEXT_INTERVAL_MINUTES | Interval in minutes to refresh the SSL context. | 5 |

Truststore (for mTLS)

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_SSL_TRUST_STORE_PATH | Path to a truststore for SSL connections. | |
| GATEWAY_SSL_TRUST_STORE_PASSWORD | Password for the truststore defined above. | |
| GATEWAY_SSL_TRUST_STORE_TYPE | jks or pkcs12. | jks |
| GATEWAY_SSL_CLIENT_AUTH | NONE: do not request client authentication. OPTIONAL: request client authentication. REQUIRE: require client authentication. | NONE |
| GATEWAY_SSL_PRINCIPAL_MAPPING_RULES | mTLS leverages SSL mutual authentication to identify a Kafka client. The principal for an mTLS connection can be derived from the subject certificate using the same SSL principal mapping feature as in Apache Kafka. | Extracts the subject |
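
For example, TLS with mutual authentication (mTLS) could be sketched as below; the keystore and truststore paths and the passwords are placeholders:

environment:
  GATEWAY_SECURITY_PROTOCOL: SSL
  GATEWAY_SSL_KEY_STORE_PATH: /etc/gateway/keystore.jks        # placeholder path
  GATEWAY_SSL_KEY_STORE_PASSWORD: changeit                     # placeholder password
  GATEWAY_SSL_KEY_PASSWORD: changeit                           # placeholder password
  GATEWAY_SSL_CLIENT_AUTH: REQUIRE                             # require client certificates (mTLS)
  GATEWAY_SSL_TRUST_STORE_PATH: /etc/gateway/truststore.jks    # placeholder path
  GATEWAY_SSL_TRUST_STORE_PASSWORD: changeit                   # placeholder password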

OAuthbearer

Some of these definitions are taken from the Kafka documentation, e.g. SASL_OAUTHBEARER_JWKS_ENDPOINT_REFRESH.

| Environment variable | Description |
|---|---|
| GATEWAY_OAUTH_JWKS_URL | The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. |
| GATEWAY_OAUTH_EXPECTED_ISSUER | The (optional) setting used to verify that the JWT was created by the expected issuer. The JWT is inspected for the standard OAuth iss claim and, if this value is set, it is matched exactly against the JWT's iss claim. If there is no match, the JWT is rejected and authentication fails. |
| GATEWAY_OAUTH_EXPECTED_AUDIENCES | The (optional) comma-delimited setting used to verify that the JWT was issued for one of the expected audiences. The JWT is inspected for the standard OAuth aud claim and, if this value is set, it must exactly match one of the values in the JWT's aud claim. If there is no match, the JWT is rejected and authentication fails. |
| GATEWAY_OAUTH_JWKS_REFRESH | The (optional) value in milliseconds to wait between refreshes of the JWKS (JSON Web Key Set) cache that contains the keys used to verify the signature of the JWT. |
| GATEWAY_OAUTH_JWKS_RETRY | The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm: the initial wait is based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and doubles between attempts, up to the maximum wait specified by sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms. |
| GATEWAY_OAUTH_JWKS_MAX_RETRY | The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm: the initial wait is based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and doubles between attempts, up to the maximum wait specified by sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms. |
| GATEWAY_OAUTH_SCOPE_CLAIM_NAME | The OAuth claim for the scope is often named scope, but this (optional) setting can provide a different name for the scope claim in the JWT payload if the OAuth/OIDC provider uses another name. |
| GATEWAY_OAUTH_SUB_CLAIM_NAME | The OAuth claim for the subject is often named sub, but this (optional) setting can provide a different name for the subject claim in the JWT payload if the OAuth/OIDC provider uses another name. |
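
A sketch of an OAuth configuration, assuming a hypothetical identity provider at idp.example.com; the JWKS URL, issuer, audience and refresh interval are placeholders:

environment:
  GATEWAY_OAUTH_JWKS_URL: https://idp.example.com/.well-known/jwks.json   # placeholder JWKS endpoint
  GATEWAY_OAUTH_EXPECTED_ISSUER: https://idp.example.com/                 # placeholder issuer
  GATEWAY_OAUTH_EXPECTED_AUDIENCES: conduktor-gateway                     # placeholder audience
  GATEWAY_OAUTH_JWKS_REFRESH: 3600000                                     # refresh the JWKS cache every hour (illustrative)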

Plain

See Client Authentication for details.

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_USER_POOL_SECRET_KEY | When using SASL_PLAINTEXT or SASL_SSL, you can create local service accounts on the Gateway. Their credentials are generated by the Gateway based on GATEWAY_USER_POOL_SECRET_KEY, which is why we strongly recommend changing this value for production deployments. | A default value is used to sign tokens and must be changed. |
| GATEWAY_USER_POOL_SERVICE_ACCOUNT_REQUIRED | If true, verify that a user mapping exists for the service account when a client connects in non-delegated SASL/PLAIN mode. | false |

Security provider

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_SECURITY_PROVIDER | Specify your security provider: DEFAULT (from your JRE), BOUNCY_CASTLE, BOUNCY_CASTLE_FIPS or CONSCRYPT. Note that CONSCRYPT does not support macOS on aarch64. | DEFAULT |

Cluster Switching / Failover

Setting up your Kafka clusters for failover is similar to the standard setup, except that you provide two sets of properties: one for your main cluster and one for your failover cluster. You can define these properties as environment variables or, if you prefer, load a cluster configuration file.

| Environment variable | Description |
|---|---|
| GATEWAY_BACKEND_KAFKA_SELECTOR | Indicates the use of a configuration file and provides its path, e.g. 'file: { path: /cluster-config.yaml }'. |
| KAFKA_FAILOVER_GATEWAY_ROLES | Sets the Gateway into failover mode; set this to failover for this scenario. |

Main Cluster

| Environment variable | Description |
|---|---|
| KAFKA_MAIN_BOOTSTRAP_SERVERS | Bootstrap servers. |
| KAFKA_MAIN_SECURITY_PROTOCOL | Security protocol. |
| KAFKA_MAIN_SASL_MECHANISM | SASL mechanism. |
| KAFKA_MAIN_SASL_JAAS_CONFIG | SASL JAAS config. |

Failover Cluster

| Environment variable | Description |
|---|---|
| KAFKA_FAILOVER_BOOTSTRAP_SERVERS | Bootstrap servers. |
| KAFKA_FAILOVER_SECURITY_PROTOCOL | Security protocol. |
| KAFKA_FAILOVER_SASL_MECHANISM | SASL mechanism. |
| KAFKA_FAILOVER_SASL_JAAS_CONFIG | SASL JAAS config. |
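
As an example, the two sets of properties could be declared directly as environment variables; all broker addresses and credentials below are placeholders:

environment:
  # Main cluster (placeholder values)
  KAFKA_MAIN_BOOTSTRAP_SERVERS: main-kafka:9092
  KAFKA_MAIN_SECURITY_PROTOCOL: SASL_PLAINTEXT
  KAFKA_MAIN_SASL_MECHANISM: PLAIN
  KAFKA_MAIN_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';
  # Failover cluster (placeholder values)
  KAFKA_FAILOVER_BOOTSTRAP_SERVERS: failover-kafka:9092
  KAFKA_FAILOVER_SECURITY_PROTOCOL: SASL_PLAINTEXT
  KAFKA_FAILOVER_SASL_MECHANISM: PLAIN
  KAFKA_FAILOVER_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username='usr' password='pwd';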

Internal topics

Because Gateway instances are stateless, their internal state is stored in Kafka topics. The following environment variables configure these topics.

Internal State

To keep the Gateway instances stateless, internal state is stored in Kafka topics.

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_GROUP_ID | The consumer group used by the Gateway to consume the internal license topic. This consumer group is also how Gateways from the same Gateway cluster recognize each other. | conduktor_${GATEWAY_CLUSTER_ID} |
| GATEWAY_STORE_TTL_MS | Time between full refreshes. | 604800000 |
| GATEWAY_TOPIC_STORE_KCACHE_REPLICATION_FACTOR | Replication factor of the internal store topics. Defaults to the one defined in your cluster settings. | -1 |

Topic Names

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_LICENSE_TOPIC | Topic where the license is stored. | _conduktor_${GATEWAY_CLUSTER_ID}_license |
| GATEWAY_TOPIC_MAPPINGS_TOPIC | Topic where topic aliases are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_topicmappings |
| GATEWAY_USER_MAPPINGS_TOPIC | Topic where service accounts are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_usermappings |
| GATEWAY_CONSUMER_OFFSETS_TOPIC | Topic where offsets for concentrated topic consumption are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_consumer_offsets |
| GATEWAY_INTERCEPTOR_CONFIGS_TOPIC | Topic where deployed interceptors are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_interceptor_configs |
| GATEWAY_ENCRYPTION_CONFIGS_TOPIC | Topic where the encryption configuration is stored, in specific cases. | _conduktor_${GATEWAY_CLUSTER_ID}_encryption_configs |
| GATEWAY_ACLS_TOPIC | Topic where the ACLs managed by the Gateway are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_acls |
| GATEWAY_AUDIT_LOG_TOPIC | Topic where the Gateway audit log is stored. | _conduktor_${GATEWAY_CLUSTER_ID}_auditlogs |
| GATEWAY_VCLUSTERS_TOPIC | Topic where the virtual clusters are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_vclusters |
| GATEWAY_GROUPS_TOPIC | Topic where the service account groups are stored. | _conduktor_${GATEWAY_CLUSTER_ID}_groups |

Internal Setup

Threading

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_DOWNSTREAM_THREAD | The number of threads dedicated to handling IO between clients and Conduktor Gateway. | Number of cores |
| GATEWAY_UPSTREAM_THREAD | The number of threads dedicated to handling IO between Kafka and Conduktor Gateway. | Number of cores |

Feature Flags

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_FEATURE_FLAGS_AUDIT | Whether or not to enable the audit feature. | true |
| GATEWAY_FEATURE_FLAGS_INTERNAL_LOAD_BALANCING | Whether or not to enable the Gateway internal load balancing. | true |

Monitoring

Audit

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_AUDIT_LOG_CONFIG_SPEC_VERSION | Version. | 0.1.0 |
| GATEWAY_AUDIT_LOG_SERVICE_BACKING_TOPIC | Target topic name. | _auditLogs |
| GATEWAY_AUDIT_LOG_REPLICATION_FACTOR_OF_TOPIC | Replication factor to use when creating the audit topic. Defaults to the one defined in your cluster settings. | -1 |
| GATEWAY_AUDIT_LOG_NUM_PARTITIONS_OF_TOPIC | Number of partitions to use when creating the audit topic. Defaults to the one defined in your cluster settings. | -1 |
| GATEWAY_AUDIT_LOG_KAFKA_ | Prefix for overriding the Kafka producer configuration for audit logs, e.g. GATEWAY_AUDIT_LOG_KAFKA_LINGER_MS=0. | |
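
For example, assuming the GATEWAY_AUDIT_LOG_KAFKA_ prefix follows the same property-translation rule as the KAFKA_ variables, the audit producer's linger.ms and batch.size could be overridden like this; the values are illustrative:

environment:
  GATEWAY_AUDIT_LOG_KAFKA_LINGER_MS: 0          # producer linger.ms (example from the table above)
  GATEWAY_AUDIT_LOG_KAFKA_BATCH_SIZE: 16384     # producer batch.size (illustrative override)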

Logging

| Environment variable | Description | Default value | Package |
|---|---|---|---|
| LOG4J2_APPENDER_LAYOUT | The format for console log output. Use json for JSON layout or pattern for pattern layout. | pattern | |
| LOG4J2_IO_CONDUKTOR_PROXY_NETWORK_LEVEL | Low-level networking, connection mapping, authentication, authorization. | info | io.conduktor.proxy.network |
| LOG4J2_IO_CONDUKTOR_UPSTREAM_THREAD_LEVEL | Request processing and forwarding. At trace, logs the requests sent. | info | io.conduktor.proxy.thread.UpstreamThread |
| LOG4J2_IO_CONDUKTOR_PROXY_REBUILDER_COMPONENTS_LEVEL | Request and response rewriting. Logs response payloads at debug (useful for checking METADATA). | info | io.conduktor.proxy.rebuilder.components |
| LOG4J2_IO_CONDUKTOR_PROXY_SERVICE_LEVEL | Various services. Logs ACL checks and interceptor targeting at debug. Logs post-interceptor request/response payloads at trace. | info | io.conduktor.proxy.service |
| LOG4J2_IO_CONDUKTOR_LEVEL | Catch-all for logs not covered by the more specific packages above. | info | io.conduktor |
| LOG4J2_ORG_APACHE_KAFKA_LEVEL | Kafka log level. | warn | org.apache.kafka |
| LOG4J2_IO_KCACHE_LEVEL | KCache log level (our persistence library). | warn | io.kcache |
| LOG4J2_IO_VERTX_LEVEL | Vert.x log level (our HTTP API framework). | warn | io.vertx |
| LOG4J2_IO_NETTY_LEVEL | Netty log level (our network framework). | error | io.netty |
| LOG4J2_IO_MICROMETER_LEVEL | Micrometer log level (our metrics framework). | error | io.micrometer |
| LOG4J2_ROOT_LEVEL | Root logging level (applies to anything else not listed above). | info | (root) |
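
For instance, to switch to JSON log output and temporarily increase the verbosity of the network-level logs, you might set:

environment:
  LOG4J2_APPENDER_LAYOUT: json                        # structured JSON console output
  LOG4J2_IO_CONDUKTOR_PROXY_NETWORK_LEVEL: debug      # more detail on connections and authentication
  LOG4J2_ROOT_LEVEL: info                             # keep everything else at the default level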

Product Analytics

| Environment variable | Description | Default value |
|---|---|---|
| GATEWAY_FEATURE_FLAGS_ANALYTICS | Conduktor collects basic usage analytics (such as a Gateway Started event) to understand product usage and guide product development and improvement. This is not based on any of the underlying Kafka data, which is never sent to Conduktor. | true |