- Decide on how to configure your network.
- Define your load balancing requirements.
- Connect Gateway to Kafka.
- Configure Gateway to accept client connections.
- Decide whether you need Virtual Clusters.
1. Configure network
When configuring Conduktor Gateway for the first time, selecting the appropriate routing method is crucial for optimizing your Kafka proxy setup. Pick one of these solutions depending on your requirements.

Choose port-based routing if your environment:
- doesn't require TLS encryption
- has flexible network port management capabilities
- prefers a simpler, straightforward configuration without DNS complexities

Choose host-based (SNI) routing if your environment:
- requires TLS-encrypted connections for secure communication
- faces challenges with managing multiple network ports
- seeks a scalable solution with easier management of routing through DNS and host names
Port-based routing
In port-based routing, each Kafka broker is assigned a unique port number, and clients connect to the appropriate port to access the required broker. Gateway listens on as many ports as defined by the environment variable `GATEWAY_PORT_COUNT`. In production, we recommend setting this to double the number of Kafka brokers, to cover the growth of the Kafka cluster.
Configure port-based routing using these environment variables:
- `GATEWAY_ADVERTISED_HOST`
- `GATEWAY_PORT_START`
- `GATEWAY_PORT_COUNT`
- `GATEWAY_MIN_BROKERID`

`GATEWAY_MIN_BROKERID` should match the lowest broker ID in your cluster. E.g., in a three-broker cluster with IDs 1, 2 and 3, `GATEWAY_MIN_BROKERID` should be set to 1 and the default port count will be 5.
We recommend SNI routing when you're not using a sequential and stable range of broker IDs, to avoid excessive port assignment. E.g., a three-broker cluster with IDs 100, 200 and 300 and `GATEWAY_MIN_BROKERID=100` will default to 203 ports, and would fail if broker ID 400 were introduced.
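For illustration, here's a minimal sketch of a port-based routing configuration for the three-broker cluster with IDs 1, 2 and 3 described above (the hostname and port values are placeholders to adapt to your environment):

```yaml
environment:
  GATEWAY_ADVERTISED_HOST: gateway.example.com  # hostname your clients can resolve
  GATEWAY_PORT_START: 6969                      # first port Gateway listens on
  GATEWAY_PORT_COUNT: 6                         # 2x broker count, to allow for growth
  GATEWAY_MIN_BROKERID: 1                       # lowest broker ID in the cluster
```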
Host-based routing (SNI)

With host-based routing, Gateway listens on a single port and leverages Server Name Indication (SNI), an extension to the TLS protocol, to route traffic: the hostname specified in the TLS handshake determines the target Kafka broker. This requires valid TLS certificates, a proper DNS setup and working DNS resolution. Find out how to set up SNI routing.

2. Define load balancing
To map the different Gateway nodes sharing the same cluster to your Kafka brokers, you can use either:
- Gateway's internal load balancing, or
- an external load balancer.
Internal load balancing
Internal load balancing refers to Gateway's ability to distribute client connections between the different Gateway nodes in the same cluster. This is done automatically by Gateway and is the default behavior. To deploy multiple Gateway nodes as part of the same Gateway cluster, set the same `GATEWAY_CLUSTER_ID` in each node's deployment configuration. This ensures that all nodes join the same consumer group, enabling them to consume the internal license topic from your Kafka cluster; this is how the nodes recognize each other as members of the same Gateway cluster.
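For example, every node of a given Gateway cluster might carry the same value (the value itself is illustrative):

```yaml
environment:
  GATEWAY_CLUSTER_ID: gateway-cluster-1  # identical on every node of this Gateway cluster
```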
When a client connects to one of the Gateway nodes to request metadata, the following process occurs (assuming `GATEWAY_FEATURE_FLAGS_INTERNAL_LOAD_BALANCING` is set to `true`, which is the default):
- The client chooses one of the bootstrap servers to ask for metadata.
- The Gateway node generates a mapping between its cluster nodes and the Kafka brokers.
- The Gateway node returns this mapping to the client.
- With the mapping in hand, the client can efficiently route its requests. For instance, if the client needs to produce to a partition where broker 3 is the leader, it knows to forward the request to Gateway 2 on port 9094.

If you have specified a `GATEWAY_RACK_ID`, the mapping will take this into consideration and a Gateway node in the same rack as the Kafka broker will be assigned.
Internal load balancing limitations

In a Kubernetes environment, your ingress must point at a single service, which could be an external load balancer as detailed below.

External load balancing
Alternatively, you can disable internal load balancing by setting `GATEWAY_FEATURE_FLAGS_INTERNAL_LOAD_BALANCING: false`.
In this case, you deploy your own load balancer, such as HAProxy, to manage traffic distribution. This allows you to configure the stickiness of the load balancer as required.
Here’s an example where:
- All client requests are directed to the external load balancer which acts as the entry point to your Gateway cluster.
- The load balancer forwards each request to one of the Gateway nodes, regardless of the port.
- The selected Gateway node, which knows which broker is the leader of each partition, forwards the request to the appropriate Kafka broker.

When using an external load balancer, you must set the `GATEWAY_ADVERTISED_LISTENER` of the Gateway nodes to the load balancer's hostname. If this isn't done, applications will attempt to connect directly to Gateway, bypassing the load balancer.

External load balancing limitations
This requires you to handle load balancing manually, as you won't have the advantage of the automatic load balancing offered by Gateway's internal load balancing feature.

3. Connect Gateway to Kafka

Environment variables prefixed with `KAFKA_` are mapped to configuration properties for connecting Gateway to the Kafka cluster.
As Gateway is based on the Java Kafka clients, it supports all configuration properties that Java clients do.
Kafka configuration properties are mapped to Gateway environment variables as follows:
- Add a `KAFKA_` prefix
- Replace each dot, `.`, with an underscore, `_`
- Convert to uppercase

For example, `bootstrap.servers` is set by the `KAFKA_BOOTSTRAP_SERVERS` environment variable.
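To illustrate the mapping, here's a sketch of a few common client properties expressed as Gateway environment variables (values are placeholders):

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <broker-1>:9092,<broker-2>:9092  # bootstrap.servers
  KAFKA_SECURITY_PROTOCOL: SASL_SSL                         # security.protocol
  KAFKA_SASL_MECHANISM: PLAIN                               # sasl.mechanism
```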
Supported protocols

You can use all the Kafka security protocols to authenticate Gateway to the Kafka cluster: `PLAINTEXT`, `SASL_PLAINTEXT`, `SASL_SSL` and `SSL`.
These can be used with all SASL mechanisms supported by Apache Kafka: `PLAIN`, `SCRAM-SHA`, `OAUTHBEARER`, Kerberos, etc. In addition, we support IAM authentication for AWS MSK clusters.
In the following examples, we provide blocks of environment variables which can be passed to Gateway, e.g. in a docker-compose file or a Helm deployment.
Information that should be customized is enclosed by `<` and `>`.
PLAINTEXT

For a Kafka cluster without authentication or encryption in transit (`PLAINTEXT`), you just need the bootstrap servers:
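A minimal example:

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <bootstrap-servers>
```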
SSL
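For a Kafka cluster with TLS encryption in transit, a minimal sketch (assuming a JKS truststore mounted into the Gateway container; paths and passwords are placeholders):

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <bootstrap-servers>
  KAFKA_SECURITY_PROTOCOL: SSL
  KAFKA_SSL_TRUSTSTORE_LOCATION: /config/kafka.truststore.jks  # ssl.truststore.location
  KAFKA_SSL_TRUSTSTORE_PASSWORD: <truststore-password>         # ssl.truststore.password
```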
mTLS
Kafka cluster with mTLS client authentication:
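A sketch, assuming JKS keystore/truststore files mounted into the container (paths and passwords are placeholders):

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <bootstrap-servers>
  KAFKA_SECURITY_PROTOCOL: SSL
  KAFKA_SSL_TRUSTSTORE_LOCATION: /config/kafka.truststore.jks
  KAFKA_SSL_TRUSTSTORE_PASSWORD: <truststore-password>
  KAFKA_SSL_KEYSTORE_LOCATION: /config/gateway.keystore.jks  # Gateway's client certificate
  KAFKA_SSL_KEYSTORE_PASSWORD: <keystore-password>
  KAFKA_SSL_KEY_PASSWORD: <key-password>
```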
SASL_PLAINTEXT

Kafka cluster with the SASL_PLAINTEXT security protocol (no encryption in transit), supporting the following SASL mechanisms.

SASL PLAIN
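A minimal sketch, using Kafka's standard `PlainLoginModule` JAAS configuration (credentials are placeholders):

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <bootstrap-servers>
  KAFKA_SECURITY_PROTOCOL: SASL_PLAINTEXT
  KAFKA_SASL_MECHANISM: PLAIN
  KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
```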
SASL SCRAM
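A minimal sketch for SCRAM (use `SCRAM-SHA-256` or `SCRAM-SHA-512` to match your cluster):

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <bootstrap-servers>
  KAFKA_SECURITY_PROTOCOL: SASL_PLAINTEXT
  KAFKA_SASL_MECHANISM: SCRAM-SHA-512
  KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.scram.ScramLoginModule required username="<username>" password="<password>";
```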
OAuthbearer
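A possible shape, using the OAuth support built into the Java Kafka clients (KIP-768). The exact callback handler class depends on your Kafka client version, so treat this as an assumption to verify:

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <bootstrap-servers>
  KAFKA_SECURITY_PROTOCOL: SASL_PLAINTEXT
  KAFKA_SASL_MECHANISM: OAUTHBEARER
  KAFKA_SASL_OAUTHBEARER_TOKEN_ENDPOINT_URL: <token-endpoint-url>
  KAFKA_SASL_LOGIN_CALLBACK_HANDLER_CLASS: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
  KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="<client-id>" clientSecret="<client-secret>";
```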
SASL_SSL
Kafka cluster that uses SASL for authentication and TLS (formerly SSL) for encryption in transit.
PLAIN

Kafka cluster with SASL_SSL and the PLAIN SASL mechanism.
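A minimal sketch (add truststore settings as in the SSL example if your cluster's certificates aren't covered by the default trust store):

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <bootstrap-servers>
  KAFKA_SECURITY_PROTOCOL: SASL_SSL
  KAFKA_SASL_MECHANISM: PLAIN
  KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="<username>" password="<password>";
```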
Confluent Cloud with API key/secret

This example can be seen as a special case of the above, with the API key and secret used as the SASL PLAIN username and password. To let clients authenticate with their own API keys/secrets instead, set `GATEWAY_SECURITY_PROTOCOL` to `DELEGATED_SASL_PLAINTEXT`.
SASL SCRAM
SASL GSSAPI (Kerberos)
OAuthbearer
AWS MSK cluster with IAM
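A hedged sketch using the `aws-msk-iam-auth` library's login module and callback handler (credentials are resolved from the standard AWS credential chain):

```yaml
environment:
  KAFKA_BOOTSTRAP_SERVERS: <msk-bootstrap-servers>
  KAFKA_SECURITY_PROTOCOL: SASL_SSL
  KAFKA_SASL_MECHANISM: AWS_MSK_IAM
  KAFKA_SASL_JAAS_CONFIG: software.amazon.msk.auth.iam.IAMLoginModule required;
  KAFKA_SASL_CLIENT_CALLBACK_HANDLER_CLASS: software.amazon.msk.auth.iam.IAMClientCallbackHandler
```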
Service account and ACL requirements
Depending on the client-to-Gateway authentication method you choose, the service account used to connect Gateway might need different ACLs to operate properly.

Delegated authentication

In delegated authentication, the credentials provided to establish the connection between the client and Gateway are the same ones used from Gateway to the backing Kafka. As a result, the client inherits the ACLs of the service account configured on the backing cluster. On top of that, Gateway needs its own service account with the following ACLs to operate correctly:
- Read on the internal topics (which must exist)
- Describe on the consumer group for the internal topics
- Describe on the cluster
- Describe on topics, for alias topic creation
Non-delegated

In non-delegated authentication (Local, OAuth or mTLS), the connection uses Gateway's service account to connect to the backing Kafka. This service account must have not only the ACLs necessary for these Gateway operations:
- Read on the internal topics (which must exist)
- Describe on the consumer group for the internal topics
- Describe on the cluster
- Describe on topics, for alias topic creation

but also all the permissions necessary to serve all Gateway users.
To manage ACLs on Gateway itself, set `GATEWAY_ACL_STORE_ENABLED=true`; you can then use the Kafka AdminClient to maintain ACLs with any service account declared in `GATEWAY_ADMIN_API_USERS`.
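For instance, once the ACL store is enabled, an admin user could grant read access with the standard Kafka CLI (topic and principal names are illustrative):

```bash
kafka-acls --bootstrap-server <gateway-host>:<gateway-port> \
  --command-config admin.properties \
  --add --allow-principal User:partner-app \
  --operation Read --topic orders
```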
4. Configure Gateway to accept client connections

When a Kafka client connects, Gateway authenticates it and establishes a `Principal` that represents the authenticated identity of the Kafka client.
We can split this authentication and security configuration into two aspects:
- Security protocol: defines how a Kafka client and Gateway broker should communicate and secure the connection.
- Authentication mechanism: defines how a client can authenticate itself when opening the connection.
The supported security protocols are:
- PLAINTEXT: brokers don't require client authentication, and all communication is exchanged without network security.
- SSL: with SSL only, clients don't need any client authentication, but communication between the client and the Gateway broker will be encrypted.
- mTLS: this security protocol isn't originally intended to provide authentication, but you can use the mTLS option below to enable it. mTLS leverages SSL mutual authentication to identify a Kafka client. The `Principal` for an mTLS connection can be detected from the certificate subject using the same feature as in Apache Kafka, SSL principal mapping.
- SASL_PLAINTEXT: client authentication is mandatory against Gateway, but all communication is exchanged without any network security.
- SASL_SSL: client authentication is mandatory against Gateway, and communication will be encrypted using TLS.
- DELEGATED_SASL_PLAINTEXT: client authentication is mandatory but is forwarded to Kafka for checking. Gateway intercepts the exchanged authentication data to detect the authenticated principal:
  - All communication between the client and the Gateway broker is exchanged without any network security.
  - All credentials are managed by your backing Kafka; we only provide authorization on the Gateway side, based on the exchanged principal.
| | Clients ⟶ GW transit in plaintext | Clients ⟶ GW transit is encrypted |
|---|---|---|
| Anonymous access only | Security protocol: `PLAINTEXT`<br>Authentication mechanism: none | Security protocol: `SSL`<br>Authentication mechanism: none |
| Credentials managed by Gateway | Security protocol: `SASL_PLAINTEXT`<br>Authentication mechanism: `PLAIN` | Security protocol: `SASL_SSL`<br>Authentication mechanism: `PLAIN` |
| Gateway configured with OAuth | Security protocol: `SASL_PLAINTEXT`<br>Authentication mechanism: `OAUTHBEARER` | Security protocol: `SASL_SSL`<br>Authentication mechanism: `OAUTHBEARER` |
| Clients are identified by certificates (mTLS) | Not possible (mTLS implies encryption) | Security protocol: `SSL`<br>Authentication mechanism: mTLS |
| Credentials managed by Kafka | Security protocol: `DELEGATED_SASL_PLAINTEXT`<br>Authentication mechanism: `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`, `OAUTHBEARER` or `AWS_MSK_IAM` | Security protocol: `DELEGATED_SASL_SSL`<br>Authentication mechanism: `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`, `OAUTHBEARER` or `AWS_MSK_IAM` |
Security protocol
The Gateway broker security scheme is defined by the `GATEWAY_SECURITY_PROTOCOL` configuration.
Note that you don't set an authentication mechanism on the client-to-Gateway side of the proxy; i.e. `GATEWAY_SASL_MECHANISM` does not exist and is never configured by the user. Instead, Gateway will try to authenticate the client as it presents itself.
For example, if a client is using `OAUTHBEARER`, Gateway will use the OAuth configuration to try to authenticate it. If a client connects using `PLAIN`, Gateway will try to use either the SSL configuration or validate the token itself, depending on the security protocol.
In addition to all the security protocols that Apache Kafka supports, Gateway adds two new ones, `DELEGATED_SASL_PLAINTEXT` and `DELEGATED_SASL_SSL`, which delegate authentication to Kafka.
PLAINTEXT
There is no client authentication to Gateway, and all communication is exchanged without any network security. Gateway configuration:
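A minimal example, using the `GATEWAY_SECURITY_PROTOCOL` variable described above:

```yaml
environment:
  GATEWAY_SECURITY_PROTOCOL: PLAINTEXT
```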
SSL

With SSL only, there is no client authentication, but communication between the client and the Gateway broker will be encrypted. Gateway configuration:
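A sketch; the keystore variable names here are assumptions to verify against the Gateway environment variables reference:

```yaml
environment:
  GATEWAY_SECURITY_PROTOCOL: SSL
  # TLS server certificate for Gateway's listeners (names and paths are assumptions):
  GATEWAY_SSL_KEY_STORE_PATH: /config/gateway.keystore.jks
  GATEWAY_SSL_KEY_STORE_PASSWORD: <keystore-password>
  GATEWAY_SSL_KEY_PASSWORD: <key-password>
```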
mTLS

Mutual TLS leverages client-side certificates to authenticate a Kafka client. The `Principal` for an mTLS connection can be detected from the certificate subject using the same feature as Apache Kafka, SSL principal mapping.
Gateway configuration:
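A sketch; as above, the exact variable names for the keystore, truststore and client-auth settings are assumptions to verify:

```yaml
environment:
  GATEWAY_SECURITY_PROTOCOL: SSL
  GATEWAY_SSL_KEY_STORE_PATH: /config/gateway.keystore.jks
  GATEWAY_SSL_KEY_STORE_PASSWORD: <keystore-password>
  GATEWAY_SSL_KEY_PASSWORD: <key-password>
  # Trust store containing the CA of client certificates, so Gateway can verify them:
  GATEWAY_SSL_TRUST_STORE_PATH: /config/gateway.truststore.jks
  GATEWAY_SSL_TRUST_STORE_PASSWORD: <truststore-password>
```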
SASL_PLAINTEXT
Authentication from the client is mandatory against Gateway, but all communications are exchanged without any network security. Gateway supports the Plain and OAuthbearer SASL mechanisms.

Plain
The Plain mechanism uses username/password credentials to authenticate against Gateway. Plain credentials take the form of a JWT token and are managed in Gateway using the Admin (HTTP) API; see below for the creation of tokens. Gateway configuration: `GATEWAY_USER_POOL_SECRET_KEY` has to be set to a random, base64-encoded, 256-bit value (for example, generated with `openssl rand -base64 32`) to ensure that tokens can't be forged. Otherwise, a default value for signing tokens will be used.
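A minimal sketch:

```yaml
environment:
  GATEWAY_SECURITY_PROTOCOL: SASL_PLAINTEXT
  GATEWAY_USER_POOL_SECRET_KEY: <base64-encoded-256-bit-secret>  # e.g. from: openssl rand -base64 32
```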
Client configuration:
- Create the service account (the username).
- Generate a token for the service account (the password).
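The client then connects with standard SASL PLAIN properties, using the token as the password (a sketch):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<service-account>" \
  password="<token>";
```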
OAuthbearer
OAuthbearer uses an OAuth2/OIDC security provider to authenticate a token in Gateway. The OAuth credentials base is managed in the configured provider. This mechanism also allows you to verify some claims from your OIDC provider (`audience` and `issuer`).
Gateway configuration:
SASL_SSL
Authentication from the client is mandatory against Gateway, and communication will be encrypted using TLS. Supported authentication mechanisms:
- PLAIN
- OAUTHBEARER
Plain
The Plain mechanism uses username/password credentials to authenticate against Gateway. Plain credentials are managed in Gateway using the HTTP API. Gateway configuration: set `GATEWAY_USER_POOL_SECRET_KEY` to a random value to ensure that tokens cannot be forged. Otherwise, a default value for signing tokens will be used.
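A sketch (the TLS keystore settings from the SSL section above are also required; the variable names there are assumptions):

```yaml
environment:
  GATEWAY_SECURITY_PROTOCOL: SASL_SSL
  GATEWAY_USER_POOL_SECRET_KEY: <base64-encoded-256-bit-secret>
```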
Client configuration:
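As before, a client properties sketch, now over TLS:

```properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<service-account>" \
  password="<token>";
```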
OAuthbearer
OAuthbearer uses an OAuth2/OIDC security provider to authenticate a token in Gateway. The OAuth credentials base is managed in the configured provider. This mechanism also allows you to verify some claims from your OIDC provider (`audience` and `issuer`).
Gateway configuration:
DELEGATED_SASL_PLAINTEXT
Authentication from the client is mandatory but is forwarded to Kafka for checking. Gateway intercepts the exchanged authentication data to detect the authenticated principal:
- All communication between the client and the Gateway broker is exchanged without any network security.
- All credentials are managed by your backing Kafka; we only provide authorization on the Gateway side, based on the exchanged principal.

Supported SASL mechanisms:
- PLAIN
- SCRAM-SHA-256
- SCRAM-SHA-512
- OAUTHBEARER
- AWS_MSK_IAM
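A sketch of the Gateway side; client credentials are checked by the backing Kafka, so Gateway only needs the delegated protocol plus its own Kafka connection settings (configured via the `KAFKA_` variables shown earlier):

```yaml
environment:
  GATEWAY_SECURITY_PROTOCOL: DELEGATED_SASL_PLAINTEXT
```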
DELEGATED_SASL_SSL
Authentication from the client is mandatory but is forwarded to Kafka for checking. Gateway intercepts the exchanged authentication data to detect the authenticated principal:
- All communication between the client and the Gateway broker will be encrypted using TLS.
- All credentials are managed by your backing Kafka; we only provide authorization on the Gateway side, based on the exchanged principal.

Supported SASL mechanisms:
- PLAIN
- SCRAM-SHA-256
- SCRAM-SHA-512
- OAUTHBEARER
- AWS_MSK_IAM
Principal resolver
When using Confluent Cloud with delegated authentication, Gateway supports automatically resolving API keys to their associated service account. This enhances security and improves usability by working with service account principals instead of raw API keys. See the principal resolver environment variables. Gateway configuration using environment variables:

Authentication flow
Automatic security protocol detection (default behavior)
If you don't specify a security protocol, Gateway will attempt to detect it on startup, based on the Kafka configuration. If there's no security protocol defined on the backing Kafka cluster either, the security protocol is set to `PLAINTEXT` by default.
Here's our mapping from the Kafka cluster's defined protocol:

| Kafka cluster security protocol | Gateway cluster inferred security protocol |
|---|---|
| `SASL_SSL` | `DELEGATED_SASL_SSL` |
| `SASL_PLAINTEXT` | `DELEGATED_SASL_PLAINTEXT` |
| `SSL` | `SSL` |
| `PLAINTEXT` | `PLAINTEXT` |
Re-authentication support
We support Apache Kafka re-authentication, just like Kafka brokers do. Find out more about KIP-368.

5. Decide on Virtual Clusters
A Virtual Cluster in Conduktor Gateway is a logical representation of a Kafka cluster. This allows you to create multiple virtual clusters while maintaining a single physical Kafka cluster, enabling the simulation of multiple Kafka environments on a single physical infrastructure.
Virtual Clusters are entirely optional. If you choose not to configure any, Conduktor Gateway will act as a transparent proxy for your backing Kafka cluster. This is the default mode; all topics and resources will be visible and accessible as usual, without any additional configuration.
Virtual cluster benefits
Flexibility and scalability: Virtual Clusters provide the flexibility to simulate multiple independent Kafka clusters without the need for additional physical resources. This is particularly useful where different teams or applications require separate Kafka instances but maintaining multiple physical clusters would be cost-prohibitive or complex.

Isolation and multitenancy: by using Virtual Clusters, you can ensure isolation between different logical clusters, similar to enabling multitenancy in Kafka. Each Virtual Cluster can have its own set of topics and consumer groups, and these are managed independently even though they reside on the same physical cluster.

Resource efficiency: instead of deploying and managing multiple physical clusters, which can be resource-intensive and expensive, Virtual Clusters allow you to maximize the utilization of a single physical Kafka cluster. This leads to better resource management and operational efficiency.

Example
When you create a Virtual Cluster in Conduktor Gateway, it prefixes all resources (such as topics and consumer groups) associated with that Virtual Cluster on the backing physical Kafka cluster. This prefixing ensures that there's no overlap or conflict between resources belonging to different Virtual Clusters, thereby maintaining their isolation. In the example below, we assume a topic `order` has been created on the Virtual Cluster `vc-alice`. Let's see how other Virtual Clusters and the backing cluster perceive this:
Configure Gateway for failover
In a disaster recovery or business continuity scenario, we want to be able to switch clients from one Kafka cluster (the primary) to another (the secondary) without having to reconfigure the clients. Reconfiguring clients would at least involve changing their bootstrap servers, forcing them to refresh metadata and retry all messages in flight. It might also involve distributing new credentials to the clients; for example, API keys and secrets in Confluent Cloud are tied to a specific cluster, and other Kafka providers might have different restrictions. Essentially, this implies that the central operations/Kafka team (who would be responsible for initiating the failover process) would need knowledge of all clients, which in practice is not feasible. The failover capability of Gateway solves this by redirecting client connections from the primary to the secondary cluster. This can be initiated at a central location using the Gateway HTTP API, without having to reconfigure or restart each Kafka client individually.

Prerequisites
Data replication is already in place
Gateway does not currently provide any mechanism to replicate already-written data from the primary to the secondary cluster. Therefore, to make use of our solution, you should already have such a mechanism in place. Common solutions include:
- MirrorMaker 2
- Confluent Replicator
- Confluent Cluster Linking
Kafka client configuration
No specific client configuration is necessary, besides ensuring that clients have configured enough retries (or, for JVM-based clients, that the `delivery.timeout.ms` setting is large enough) to cover the time needed for the operations team to discover failure of the primary cluster and initiate the failover procedure. Especially for JVM-based clients, the default delivery timeout of 2 minutes might be too short.
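For example, a producer might be configured along these lines (the 10-minute value is illustrative; size it to your expected failover window):

```properties
retries=2147483647
delivery.timeout.ms=600000
```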
System requirements
- Gateway version 3.3.0+
- Kafka brokers version 2.8.2+
How it works
Conduktor Gateway acts as a 'hot switch' to the secondary Kafka cluster, eliminating the need to change any client configuration in a disaster scenario. This is achievable because Gateway de-couples authentication between clients and the backing Kafka cluster(s). Note that failover must be triggered through an API request to every Gateway instance. The Conduktor team can support you in finding the best solution for initiating failover, depending on your deployment specifics.
Set up Gateway
To set up Gateway for failover, configure the primary and secondary clusters along with their configuration properties. This can be achieved through a cluster-config file or through environment variables.

Configuring through a cluster-config file

Specify your primary and secondary cluster configurations, along with a `gateway.roles` entry to mark the failover cluster (note that the API keys differ in the Confluent Cloud example below):
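A sketch of what such a file might look like; the exact schema should be checked against the Gateway reference (cluster names, endpoints and credentials are placeholders):

```yaml
# cluster-config.yaml (illustrative, Confluent Cloud-style credentials)
main:
  config:
    bootstrap.servers: <primary-bootstrap-servers>
    security.protocol: SASL_SSL
    sasl.mechanism: PLAIN
    sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="<primary-api-key>" password="<primary-api-secret>";
failover:
  config:
    bootstrap.servers: <secondary-bootstrap-servers>
    security.protocol: SASL_SSL
    sasl.mechanism: PLAIN
    sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="<secondary-api-key>" password="<secondary-api-secret>";
  gateway.roles: failover
```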
Then, mount the cluster-config file in the Gateway container and reference it through the `GATEWAY_BACKEND_KAFKA_SELECTOR` configuration:
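For instance (the selector value format is an assumption to verify against the reference):

```yaml
environment:
  GATEWAY_BACKEND_KAFKA_SELECTOR: 'file : { path : /config/cluster-config.yaml }'
```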
Configuring through environment variables
Alternatively, you can configure your primary and secondary cluster through environment variables:

Initiating failover

To initiate a failover from the primary to the secondary cluster, the following request must be made to all Gateway instances:

Switching back

To switch back from the secondary cluster to the primary cluster, the following request must be made to all Gateway instances:

Alternative solutions to switchover

Note that Conduktor can recommend alternative solutions for initiating the switchover that don't involve making an API call to every Gateway instance. These alternatives depend on your deployment configuration, so we recommend contacting us to discuss them.

Failover limitation

Chargeback will only collect data for the original cluster: during a failover event, data is not collected, but collection resumes if you fail back to the original cluster.

Configure Gateway for multi-clusters
Gateway can be configured to communicate with multiple Kafka clusters and expose their topics to your partners. You can:
- direct partners to a single endpoint,
- provide them with access to topics in multiple Kafka clusters, and
- expose topics using aliases that can differ from the actual topic names.

To set this up:
- Configure one main cluster, which will be used by Gateway to store its internal state.
- Set up any number of upstream physical Kafka clusters that you want to expose through Gateway.
If you're using partner virtual clusters to share data with external third parties, be aware that cluster IDs (e.g., `clusterA`, `clusterB`) may appear in the bootstrap server address or client logs. To prevent unintended exposure, avoid using sensitive names/information in cluster IDs.

Specify your main and upstream cluster configurations in `cluster-config.yaml`, along with a `gateway.roles` entry to mark the upstream clusters. Then, mount the cluster-config file in the Gateway container and reference it through the `GATEWAY_BACKEND_KAFKA_SELECTOR` configuration:
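A sketch under the same assumptions as the failover example above (the schema, the `upstream` role value and the selector format should be verified against the reference):

```yaml
# cluster-config.yaml (illustrative)
main:
  config:
    bootstrap.servers: <main-bootstrap-servers>
clusterA:
  config:
    bootstrap.servers: <clusterA-bootstrap-servers>
  gateway.roles: upstream
clusterB:
  config:
    bootstrap.servers: <clusterB-bootstrap-servers>
  gateway.roles: upstream
```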
Partner virtual clusters
These steps are for setting up a partner virtual cluster manually. Alternatively, to simplify the process, we recommend creating a Partner Zone, which supports multi-clusters. You can also check out the tutorial on creating Partner Zones with multi-cluster Gateway.

1. Create a partner virtual cluster

First, create a new partner virtual cluster. For partner virtual clusters, `aclEnabled` has to be `true` and `superUsers` must not be empty.

Create this YAML file (`mypartner.yaml`), then apply it:
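A sketch of such a resource; the `aclEnabled` and `superUsers` fields come from the requirements above, while the surrounding resource format is an assumption to verify against the Gateway API reference:

```yaml
# mypartner.yaml (illustrative)
apiVersion: gateway/v2
kind: VirtualCluster
metadata:
  name: mypartner
spec:
  aclEnabled: true
  superUsers:
    - mypartner-super-user
```

Applied, for example, with the Conduktor CLI:

```bash
conduktor apply -f mypartner.yaml
```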
2. Alias your topics

Finally, create aliases for existing topics in the partner virtual cluster. Alias topics within a partner virtual cluster can only point to topics from the same physical cluster.

Create this YAML file (`alias-topics.yaml`), then apply it:
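A sketch; the resource kind and field names here are assumptions to verify against the reference:

```yaml
# alias-topics.yaml (illustrative)
apiVersion: gateway/v2
kind: AliasTopic
metadata:
  name: orders                   # name exposed in the partner virtual cluster
  vCluster: mypartner
spec:
  physicalName: clusterA-orders  # existing topic on the same physical cluster
```

```bash
conduktor apply -f alias-topics.yaml
```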
3. Create service accounts
Once the virtual cluster is created and contains the topics to expose to your partners, you'll need to create service accounts and configure ACLs (Access Control Lists). Create two service accounts for the partner virtual cluster: one super user and one partner user. The super user will manage ACLs and grant permissions to the partner user, who will use their account to access the exposed topics. Declare both accounts in `service-accounts.yaml`, then store each account's credentials in a client properties file, `mypartner-super-user.properties` and `mypartner-partner-user.properties`:
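A sketch of the properties files, assuming SASL PLAIN credentials issued by Gateway (tokens as passwords):

```properties
# mypartner-super-user.properties (illustrative)
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="mypartner-super-user" \
  password="<super-user-token>";
```

The `mypartner-partner-user.properties` file is identical apart from the username and token.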
4. Create ACLs for the service accounts
Before creating ACLs, you need to know how to reach this partner virtual cluster. For that, make the following request:

5. Test partner virtual cluster access
Now that the partner user has the correct ACLs, you can use their credentials to interact with the alias topics and verify that the permissions are correctly set. Finally, share the `mypartner-partner-user.properties` file and the correct bootstrap server details with your partner.
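For example, a quick smoke test with the standard Kafka console tools (the bootstrap address is a placeholder for the partner virtual cluster's endpoint):

```bash
# Produce to an alias topic as the partner user
kafka-console-producer --bootstrap-server <partner-vcluster-bootstrap> \
  --producer.config mypartner-partner-user.properties \
  --topic orders

# ...and read it back
kafka-console-consumer --bootstrap-server <partner-vcluster-bootstrap> \
  --consumer.config mypartner-partner-user.properties \
  --topic orders --from-beginning
```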