To configure Conduktor Gateway, we recommend setting up environment variables. They can be set in the Gateway container or taken from a file. To make sure the values were set correctly, check the startup logs.
You can mount a file with the key-value pairs into the container and provide its path by setting the environment variable GATEWAY_ENV_FILE. Note that these variables must be exported by the file, as they are injected into the wrapper that starts the Gateway process.
You’ll get a confirmation in the logs: Sourcing environment variables from $GATEWAY_ENV_FILE (or a warning if the file isn’t found: Warning: GATEWAY_ENV_FILE is set but the file does not exist or is not readable.).
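For illustration, here is a minimal sketch of that approach; the file name, mount path, image tag and values are example assumptions, not requirements:

```bash
# Create an example environment file (hypothetical name, path and values).
# Variables must be exported so the Gateway start-up wrapper can source them.
cat > gateway.env <<'EOF'
export GATEWAY_ADVERTISED_HOST=gateway.mycompany.example
export KAFKA_BOOTSTRAP_SERVERS=kafka-1:9092,kafka-2:9092
EOF

# Mount the file into the container and point GATEWAY_ENV_FILE at its path.
docker run \
  -v "$(pwd)/gateway.env:/config/gateway.env:ro" \
  -e GATEWAY_ENV_FILE=/config/gateway.env \
  conduktor/conduktor-gateway:latest
```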
| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_ADVERTISED_HOST | The hostname returned in the Gateway’s metadata for clients to connect to. | Your hostname |
| GATEWAY_ROUTING_MECHANISM | Defines the routing method: port for port routing, host for SNI routing. | port |
| GATEWAY_PORT_START | The first port the Gateway listens on. | 6969 |
| GATEWAY_MIN_BROKERID | The broker ID associated with the first port (GATEWAY_PORT_START). Should match the lowest broker.id (or node.id) in the Kafka cluster. | 0 |
| GATEWAY_BIND_HOST | The network interface the Gateway binds to. | 0.0.0.0 |

**Port routing specific**

| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_PORT_COUNT | The total number of ports used by Gateway. | (maxBrokerId - minBrokerId) + 3 |

**SNI routing specific**

| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_ADVERTISED_SNI_PORT | The port returned in the Gateway’s metadata for clients to connect to when using SNI routing. | GATEWAY_PORT_START |
| GATEWAY_ADVERTISED_HOST_PREFIX | Configures the advertised broker names. | broker |
| GATEWAY_SECURITY_MODE | Defines where authentication takes place: Gateway or Kafka. Valid values are GATEWAY_MANAGED and KAFKA_MANAGED. | Depends on the combination of GATEWAY_SECURITY_PROTOCOL and KAFKA_SECURITY_PROTOCOL. See security defaults. |
| GATEWAY_SECURITY_PROTOCOL | Defines the security protocol that clients should use to connect to Gateway. For SNI routing, it has to be set to SSL or SASL_SSL when in GATEWAY_MANAGED security mode, or SASL_SSL when in KAFKA_MANAGED security mode. | Depends on the combination of GATEWAY_SECURITY_MODE and KAFKA_SECURITY_PROTOCOL. See security defaults. |
| GATEWAY_SNI_HOST_SEPARATOR | The separator used to construct returned metadata. | - |
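As a concrete illustration of the routing variables above, a minimal port-routing setup might look like the sketch below; the hostname, port range and broker IDs are assumptions for the example, not recommended values:

```bash
# Port routing sketch (hypothetical values): Gateway advertises gateway.mycompany.example
# and listens on ports 6969 and upwards, one per upstream broker ID starting at 0.
export GATEWAY_ADVERTISED_HOST=gateway.mycompany.example
export GATEWAY_ROUTING_MECHANISM=port
export GATEWAY_PORT_START=6969
export GATEWAY_MIN_BROKERID=0   # lowest broker.id (or node.id) in the Kafka cluster
export GATEWAY_PORT_COUNT=6     # e.g. (maxBrokerId - minBrokerId) + 3 with maxBrokerId=3
```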
As of Gateway 3.10.0, the DELEGATED_XXX security protocols have been deprecated in favour of an additional environment variable, GATEWAY_SECURITY_MODE. The DELEGATED values remain supported for backward compatibility but are no longer recommended for new configurations. If you’re using DELEGATED security protocols, check out the security mode migration guide before proceeding.
Conduktor Gateway’s connection to Kafka is configured by the KAFKA_ environment variables. When translating Kafka’s properties, use upper case and replace each . with _. For example, to define Gateway’s Kafka property bootstrap.servers, declare it as the environment variable KAFKA_BOOTSTRAP_SERVERS. Any variable prefixed with KAFKA_ will be treated as a connection parameter by Gateway.
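Applying that rule to a few common client properties looks like this (the values shown are placeholders):

```bash
# bootstrap.servers  -> KAFKA_BOOTSTRAP_SERVERS
export KAFKA_BOOTSTRAP_SERVERS=kafka-1:9092,kafka-2:9092
# security.protocol  -> KAFKA_SECURITY_PROTOCOL
export KAFKA_SECURITY_PROTOCOL=SASL_SSL
# sasl.mechanism     -> KAFKA_SASL_MECHANISM
export KAFKA_SASL_MECHANISM=PLAIN
```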
| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_SECURITY_MODE | Defines where authentication takes place: Gateway or Kafka. Valid values are GATEWAY_MANAGED and KAFKA_MANAGED. Note that KAFKA_MANAGED mode is incompatible with the PLAINTEXT and SSL Gateway security protocols. | Depends on the combination of GATEWAY_SECURITY_PROTOCOL and KAFKA_SECURITY_PROTOCOL. See security defaults. |
| GATEWAY_SECURITY_PROTOCOL | The type of authentication clients should use to connect to Gateway. Valid values are: PLAINTEXT, SASL_PLAINTEXT, SASL_SSL, SSL. | Depends on the combination of GATEWAY_SECURITY_MODE and KAFKA_SECURITY_PROTOCOL. See security defaults. |
| GATEWAY_FEATURE_FLAGS_MANDATORY_VCLUSTER | When no virtual cluster is detected, the user automatically falls back to the transparent virtual cluster called passthrough. If set to true, authentication is rejected when the principal (the identifier provided by the authentication process) is not mapped to a named virtual cluster with a service account. This must be set to true when using the Partner Zone feature of Conduktor Exchange. | false |
| GATEWAY_AUTO_CREATE_TOPICS_ENABLED | Enables the auto-create topics feature. When enabled, topics can be created automatically when producing or consuming through Gateway, leveraging the Kafka property auto.create.topics.enable. Authorization: users require either the CLUSTER resource with CREATE permission (allows creating any topic) or the TOPIC resource with CREATE permission for specific topics. Warning: this feature doesn’t support concentrated topics. When auto-create topics is enabled, topics that would normally be concentrated will not be; they’ll simply be created as regular physical topics instead. Take caution when enabling this setting. | false |
| GATEWAY_ACL_ENABLED | Enables/disables ACL support on the Gateway transparent virtual cluster (passthrough) only. | Depends on GATEWAY_SECURITY_MODE: GATEWAY_MANAGED: true; KAFKA_MANAGED: false; unset and undetermined: false. |
| GATEWAY_SUPER_USERS | Semicolon-separated (;) list of service accounts that will be super users on Gateway (excluding virtual clusters). Example: alice;bob. | Usernames from GATEWAY_ADMIN_API_USERS |
| GATEWAY_ACL_STORE_ENABLED | Obsolete; use the VirtualCluster resource instead. Enables/disables ACL support for virtual clusters only. | false |
| GATEWAY_AUTHENTICATION_CONNECTION_MAX_REAUTH_MS | Forces client re-authentication after this amount of time, in milliseconds. If set to 0, the client is never forced to re-authenticate until the next connection. | 0 |
| GATEWAY_SHUTDOWN_DELAY_BETWEEN_BROKERS_MS | The pause between disconnections of broker clients during the shutdown process. Set to 1000 or higher for librdkafka clients. | 0 |
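To make the relationship between these variables concrete, here is a minimal sketch of a Gateway-managed SASL_SSL setup; the values are illustrative assumptions only:

```bash
# Gateway handles authentication itself (GATEWAY_MANAGED) over SASL_SSL.
# In this mode ACLs default to enabled unless GATEWAY_ACL_ENABLED is set explicitly.
export GATEWAY_SECURITY_MODE=GATEWAY_MANAGED
export GATEWAY_SECURITY_PROTOCOL=SASL_SSL
export GATEWAY_SUPER_USERS="alice;bob"   # semicolon-separated super users (example from the table)
```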
During shutdown, Gateway closes client connections in a controlled manner to simulate a rolling Kafka broker restart, pausing for GATEWAY_SHUTDOWN_DELAY_BETWEEN_BROKERS_MS between brokers. For librdkafka clients, set GATEWAY_SHUTDOWN_DELAY_BETWEEN_BROKERS_MS to 1000 or higher to prevent them from crashing during shutdown with an ALL_BROKERS_DOWN error.
As of Gateway 3.10.0, the DELEGATED_SASL_SSL security protocol has been deprecated in favour of an additional environment variable, GATEWAY_SECURITY_MODE. The default behavior of GATEWAY_ACL_ENABLED has also changed: when left unset, it is now derived from the security mode. The DELEGATED values remain supported for backward compatibility but are no longer recommended for new configurations. If you’re using DELEGATED security protocols, check out the migration guide before proceeding.
This decision tree explains how Gateway determines default values for GATEWAY_SECURITY_PROTOCOL and GATEWAY_SECURITY_MODE when one or both are not explicitly set.
| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_SSL_CLIENT_AUTH | NONE will not request client authentication, OPTIONAL will request client authentication, REQUIRE will require client authentication. | NONE |
| GATEWAY_SSL_PRINCIPAL_MAPPING_RULES | mTLS leverages SSL mutual authentication to identify a Kafka client. The principal for an mTLS connection can be derived from the certificate subject using the same feature as in Apache Kafka: SSL principal mapping. | Extracts the subject |
| GATEWAY_SSL_ENABLED_PROTOCOLS | Comma-separated list of TLS protocol versions; restricts which protocols are offered during the SSL/TLS handshake. If not provided, it falls back to the default protocols of the security provider configured by GATEWAY_SECURITY_PROVIDER. | |
| GATEWAY_SSL_CIPHER_SUITES | Comma-separated list of cipher suites; restricts the cryptographic algorithms offered during the SSL/TLS handshake. If not provided, it falls back to the default ciphers of the security provider configured by GATEWAY_SECURITY_PROVIDER. | |
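Since the principal mapping follows Apache Kafka’s ssl.principal.mapping.rules syntax, a rule set might look like the sketch below; the certificate subject shape is an assumption for the example:

```bash
# Hypothetical example: extract the CN from a subject such as
# "CN=my-app,OU=platform,O=MyCompany" and use it as the principal;
# fall back to the full subject (DEFAULT) when no rule matches.
export GATEWAY_SSL_PRINCIPAL_MAPPING_RULES='RULE:^CN=(.*?),OU=.*$/$1/,DEFAULT'
```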
Some of these definitions (e.g. SASL_OAUTHBEARER_JWKS_ENDPOINT_REFRESH) are taken from the Kafka documentation.
| Environment variable | Description |
|----------------------|-------------|
| GATEWAY_OAUTH_JWKS_URL | The OAuth/OIDC provider URL from which the provider’s JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. |
| GATEWAY_OAUTH_EXPECTED_ISSUER | The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth iss claim and, if this value is set, the broker will match it exactly against what is in the JWT’s iss claim. If there’s no match, the broker will reject the JWT and authentication will fail. |
| GATEWAY_OAUTH_EXPECTED_AUDIENCES | The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth aud claim and, if this value is set, the broker will match the value from the JWT’s aud claim to see if there is an exact match. If there’s no match, the broker will reject the JWT and authentication will fail. |
| GATEWAY_OAUTH_JWKS_REFRESH | The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. |
| GATEWAY_OAUTH_JWKS_RETRY | The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting, doubling in wait length between attempts up to the maximum wait length specified by sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms. |
| GATEWAY_OAUTH_JWKS_MAX_RETRY | The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting, doubling in wait length between attempts up to the maximum wait length specified by sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms. |
| GATEWAY_OAUTH_SCOPE_CLAIM_NAME | The OAuth claim for the scope is often named scope, but this (optional) setting can provide a different name to use for the scope included in the JWT payload’s claims, if the OAuth/OIDC provider uses a different name for that claim. |
| GATEWAY_OAUTH_SUB_CLAIM_NAME | The OAuth claim for the subject is often named sub, but this (optional) setting can provide a different name to use for the subject included in the JWT payload’s claims, if the OAuth/OIDC provider uses a different name for that claim. |
| GATEWAY_OAUTH_USE_CC_POOL_ID | Set to true to use the Confluent Cloud pool ID as the principal name. This is useful for Confluent Cloud users in Delegated mode who want to use the pool ID as the principal name instead of the sub claim. |
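Putting a few of these together, an OAuth setup might resemble the sketch below; the JWKS URL, issuer and audiences are placeholder assumptions for a generic OIDC provider:

```bash
# Hypothetical OIDC provider values -- replace with your provider's metadata.
export GATEWAY_OAUTH_JWKS_URL=https://auth.mycompany.example/.well-known/jwks.json
export GATEWAY_OAUTH_EXPECTED_ISSUER=https://auth.mycompany.example/
export GATEWAY_OAUTH_EXPECTED_AUDIENCES=conduktor-gateway,kafka-clients
```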
| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_USER_POOL_SECRET_KEY | Base64-encoded value, 256 bits long (e.g. openssl rand -base64 32). If using SASL_PLAIN or SASL_SSL, you have the ability to create local service accounts on Gateway. These service accounts will have credentials generated by Gateway based on the GATEWAY_USER_POOL_SECRET_KEY. | No default value is provided. You must provide this value for all deployments. |
| GATEWAY_SECURITY_PROVIDER | Specify your security provider. It can be: DEFAULT (from your JRE), BOUNCY_CASTLE, BOUNCY_CASTLE_FIPS or CONSCRYPT. Note that CONSCRYPT doesn’t support macOS on aarch64. | DEFAULT |
| GATEWAY_USER_POOL_SERVICE_ACCOUNT_REQUIRED | If true, verify the existence of a user mapping for the service account when the user connects in Gateway-managed SASL/PLAIN mode. Note that this variable is deprecated in Gateway v3.9.0; the behaviour is the same as if it was set to true. | |
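The secret key itself can be generated with the openssl command referenced above; the way it is exported below is just one possible sketch:

```bash
# Generate a 256-bit, base64-encoded secret and hand it to Gateway.
# Prefer storing the generated value in a secret manager rather than plain shell history.
GATEWAY_USER_POOL_SECRET_KEY="$(openssl rand -base64 32)"
export GATEWAY_USER_POOL_SECRET_KEY
```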
Setting up your Kafka clusters for failover is similar to the standard setup, but you need to provide two sets of properties: one for your main cluster and one for your failover cluster. You can define these properties as environment variables or load a cluster configuration file.
| Environment variable | Description |
|----------------------|-------------|
| GATEWAY_BACKEND_KAFKA_SELECTOR | Indicates the use of a configuration file and provides its path, e.g.: 'file: { path: /cluster-config.yaml }'. |
| KAFKA_FAILOVER_GATEWAY_ROLES | To turn Gateway into failover mode, set this to failover. |
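For the configuration-file route, the selector is set as a plain environment variable; the path shown is the example value from the table, and the file itself must define both the main and failover cluster properties (its format is covered in the failover guide and not reproduced here):

```bash
# Use a cluster configuration file instead of plain KAFKA_ variables.
export GATEWAY_BACKEND_KAFKA_SELECTOR='file: { path: /cluster-config.yaml }'
```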
As Gateway is stateless, it uses Kafka topics to store its internal state. Use the following environment variables to configure these internal topics. If missing, Gateway will automatically create the topics (if it has the permission to do so). You can also create the topics independently of Gateway; just make sure they are configured as described below.
First, there are some general configuration settings for Gateway’s internal state management that apply to all of the topics it uses.
| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_GROUP_ID | Sets the consumer group name used by Gateway to consume the internal license topic. This consumer group is used by Gateways from the same cluster to recognize each other. | conduktor_${GATEWAY_CLUSTER_ID} |
| GATEWAY_STORE_TTL_MS | Time between full refreshes, in milliseconds. | 604800000 |
| GATEWAY_TOPIC_STORE_KCACHE_REPLICATION_FACTOR | Replication factor for Gateway’s internal KCache topic store. | Defaults to the value defined in your cluster settings. |
The most important setting is log.cleanup.policy, which defines the clean-up policy for the topic. Most of the topics used by Gateway are compacted, but some use time-based retention. If this isn’t set up properly, Gateway will throw an error on startup. Set the following:
log.cleanup.policy=compact for compaction
log.cleanup.policy=delete for time based retention
If Gateway creates the topics for you, it will set the right values.

The second vital setting is the replication factor. This should be set to at least 3 in production environments to ensure that the data is safe (Gateway will warn you on startup if it’s set to less than three). When creating topics, Gateway uses your Kafka brokers’ default value for this setting.

For partition count, most of the topics are low volume and can operate well with only a single partition. This isn’t enforced (Gateway will work with multi-partition topics for internal state), but there is no need for more than one partition. The exception is the audit log topic, which can have a lot of events written to it if enabled for a busy cluster. We recommend starting with 3 partitions for the audit log; this doesn’t affect Gateway performance (it is a writer, not a reader), but it will affect any other consumers you run that read from it.
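If you prefer to create the internal topics yourself, the settings above translate into a kafka-topics.sh invocation along these lines; the topic name and bootstrap server are placeholders, since the actual internal topic names depend on your Gateway configuration:

```bash
# Example only: create a compacted, 3-way replicated, single-partition internal topic.
# cleanup.policy is the topic-level equivalent of the broker setting log.cleanup.policy.
# Replace <gateway-internal-topic> with the actual topic name used by your Gateway.
kafka-topics.sh --bootstrap-server kafka-1:9092 \
  --create \
  --topic <gateway-internal-topic> \
  --partitions 1 \
  --replication-factor 3 \
  --config cleanup.policy=compact
```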
The GATEWAY_FEATURE_FLAGS_BLOCK_UNSUPPORTED_APIS environment variable controls how Gateway handles Kafka APIs that aren’t explicitly supported. There are two modes:
Permissive mode (false, default): Gateway allows unsupported APIs to pass through, maintaining backward compatibility
Restrictive mode (true): Gateway blocks unsupported APIs for enhanced security
When to use each:
Use permissive (default) when upgrading or if you have legacy applications
Use restrictive when you want maximum security and only allow known, supported APIs
Impact of each mode:

Permissive mode (false):
✅ Backward compatibility: Legacy applications continue to work
✅ Smooth upgrades: No breaking changes during Gateway updates
Unsupported APIs are passed through to Kafka without Gateway processing
Critical security risk: KIP-932 Share Group APIs bypass all ACL checks completely
Configuration exposure: KIP-1000 APIs allow access to sensitive configuration without proper permissions
Internal API access: Internal/Admin APIs meant for broker-to-broker communication may be accessible to clients
Topic mapping and other Gateway features may not work correctly
Potential for unauthorized access to restricted resources and information disclosure
When GATEWAY_FEATURE_FLAGS_BLOCK_UNSUPPORTED_APIS=true (restrictive mode):
All unsupported APIs are blocked at the Gateway level
Prevents security vulnerabilities: Blocks KIP-932 and KIP-1000 APIs that bypass ACL checks
Prevents internal API access: Blocks internal/Admin APIs that should not be accessible to clients
Enhanced security through explicit allow-list approach
Applications must use only Gateway-supported APIs
Example of a problematic situation: the ListClientMetricsResources API bypasses ACL checks.

When GATEWAY_FEATURE_FLAGS_BLOCK_UNSUPPORTED_APIS=false (permissive mode), clients can call the ListClientMetricsResources API without proper authorization. This API is designed to list all client metrics configuration resources in the cluster.

Attack scenario:
Unauthorized client connects to Gateway without DESCRIBE_CONFIGS permission on CLUSTER resource
Client calls the ListClientMetricsResources API directly
Gateway forwards the request to Kafka without ACL validation (since it’s not in the supported APIs list)
Kafka responds with complete list of client metrics configuration resources
Client receives sensitive configuration information including:
All configured client metrics resources
Internal cluster configuration details
Resource names and configurations that should be protected
Security impact:
Information disclosure: Unauthorized access to sensitive cluster configuration
ACL bypass: Complete circumvention of Gateway’s permission system
Compliance violation: Access to restricted data without proper authorization
Reconnaissance: Attackers can enumerate cluster resources for further attacks
Why this happens:
The ListClientMetricsResources API is not included in Gateway’s supported APIs list, so when permissive mode is enabled, it bypasses all Gateway security controls and is forwarded directly to Kafka without ACL validation.
This setting only affects unsupported APIs. APIs that are explicitly blocked for security or stability reasons (like consumer group management) will always be blocked, regardless of this setting.
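To opt into restrictive mode, the flag is set like any other Gateway environment variable, for example:

```bash
# Block any Kafka API that Gateway does not explicitly support.
export GATEWAY_FEATURE_FLAGS_BLOCK_UNSUPPORTED_APIS=true
```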
These timeout thresholds are used by Gateway and typically don’t need modification, since they match Apache Kafka’s defaults. If your Kafka brokers or clients use non-default timeout values, you may want to adjust these.
| Environment variable | Description | Default value |
|----------------------|-------------|---------------|
| GATEWAY_INFLIGHT_REQUEST_EXPIRY_MS | Timeout for in-flight requests, in milliseconds. If Kafka doesn’t respond before this delay, Gateway responds with a request timeout to the client. It has to exceed the consumer’s max.poll.interval.ms (default 300000) and the client’s request.timeout.ms, because JOIN_GROUP or SYNC_GROUP requests can take up to that duration per KIP-62. | 330000 |
| GATEWAY_UPSTREAM_MAX_IDLE_TIME_MS | Maximum time Gateway connections can remain idle before being closed, in milliseconds. It has to exceed the Kafka broker’s connections.max.idle.ms and should be set higher than GATEWAY_INFLIGHT_REQUEST_EXPIRY_MS. | 600000 |
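For example, if your consumers raise max.poll.interval.ms above the Kafka default, these Gateway timeouts can be raised to stay above it; the numbers below are illustrative assumptions only:

```bash
# Illustrative values for consumers running with max.poll.interval.ms=600000:
# keep the in-flight expiry above the consumer timeout, and the idle timeout above that.
export GATEWAY_INFLIGHT_REQUEST_EXPIRY_MS=660000
export GATEWAY_UPSTREAM_MAX_IDLE_TIME_MS=900000
```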
Conduktor collects basic user analytics (such as a Gateway Started event) to understand product usage and to support product development and improvement. This is not based on any of the underlying Kafka data, which is never sent to Conduktor.