What is merge cluster?

Conduktor Gateway's merge cluster brings all your Kafka clusters together into a single instance that client applications access through one endpoint.

View the full demo in real time

You can either follow all the steps manually or watch the recording

Review the docker compose environment

As can be seen from docker-compose.yaml, the demo environment consists of the following services:

  • gateway1
  • gateway2
  • kafka-client
  • kafka1
  • kafka2
  • kafka3
  • s1_kafka1
  • s1_kafka2
  • s1_kafka3
  • schema-registry
  • zookeeper
  • zookeeper_s1
cat docker-compose.yaml
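
If you only want the list of services without scrolling through the whole file, Compose can print it for you (standard Docker Compose CLI, not specific to this demo):

docker compose config --services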

Review the Gateway configuration

Gateway's view of the underlying Kafka clusters is defined in clusters.yaml

cat clusters.yaml
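
This file is what ties Gateway's logical cluster IDs to real Kafka bootstrap servers. Below is a minimal sketch of the kind of mapping to expect; the cluster IDs main and cluster1 are the ones referenced by the routing calls later in this demo, while the top-level key, hostnames and ports shown here are illustrative assumptions rather than the demo's actual file:

config:
  main:        # illustrative: the default cluster backing the Gateway
    bootstrap.servers: kafka1:9092,kafka2:9092,kafka3:9092
  cluster1:    # illustrative: the second cluster, used for us_cars below
    bootstrap.servers: s1_kafka1:9092,s1_kafka2:9092,s1_kafka3:9092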

Starting the docker environment

Start all the Docker services in the background and wait until they are up and healthy

  • --wait: Wait for services to be running|healthy. Implies detached mode.
  • --detach: Detached mode: Run containers in the background
docker compose up --detach --wait
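
Before moving on, you can confirm all the services report a running or healthy state:

docker compose ps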

Creating virtual cluster teamA

Create the virtual cluster teamA on gateway1, then build and review the properties file clients use to access it

# Generate virtual cluster teamA with service account sa
token=$(curl \
--request POST "http://localhost:8888/admin/vclusters/v1/vcluster/teamA/username/sa" \
--header 'Content-Type: application/json' \
--user 'admin:conduktor' \
--silent \
--data-raw '{"lifeTimeSeconds": 7776000}' | jq -r ".token")

# Create access file
echo """
bootstrap.servers=localhost:6969
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='sa' password='$token';
""" > teamA-sa.properties

# Review file
cat teamA-sa.properties
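
As an optional sanity check, you can exercise the generated file against the Gateway before creating any aliases; kafka-broker-api-versions only authenticates and fetches metadata, so a clean run confirms the token and listener work (standard Kafka CLI tooling, not part of the demo itself):

kafka-broker-api-versions \
--bootstrap-server localhost:6969 \
--command-config teamA-sa.properties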

Create the topic 'cars' in main cluster

Creating on kafka1:

  • Topic cars with partitions:1 and replication-factor:1
kafka-topics \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--replication-factor 1 \
--partitions 1 \
--create --if-not-exists \
--topic cars

Create the topic 'cars' in cluster1

Creating on s1_kafka1:

  • Topic cars with partitions:1 and replication-factor:1
kafka-topics \
--bootstrap-server localhost:29092,localhost:29093,localhost:29094 \
--replication-factor 1 \
--partitions 1 \
--create --if-not-exists \
--topic cars
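
If you want to double-check that both physical topics exist before wiring up the aliases, describe them on each cluster:

kafka-topics \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--describe --topic cars

kafka-topics \
--bootstrap-server localhost:29092,localhost:29093,localhost:29094 \
--describe --topic cars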

Let's route the topic 'eu_cars', as seen by the client application, onto the 'cars' topic on the main (default) cluster

curl \
--silent \
--request POST localhost:8888/internal/alias-topic/teamA/eu_cars \
--user 'admin:conduktor' \
--header 'Content-Type: application/json' \
--data-raw '{
"clusterId": "main",
"physicalTopicName": "cars"
}' | jq

Let's route the topic 'us_cars', as seen by the client application, onto the 'cars' topic on the second cluster (cluster1)

curl \
--silent \
--request POST localhost:8888/internal/alias-topic/teamA/us_cars \
--user 'admin:conduktor' \
--header 'Content-Type: application/json' \
--data-raw '{
"clusterId": "cluster1",
"physicalTopicName": "cars"
}' | jq
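
Since the client application only ever talks to the Gateway, a quick way to see the effect of both alias calls is to list the topics exposed to the teamA virtual cluster; the two alias names should appear (exact output may vary between Gateway versions):

kafka-topics \
--bootstrap-server localhost:6969 \
--command-config teamA-sa.properties \
--list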

Send into topic 'eu_cars'

Producing 1 message to eu_cars in virtual cluster teamA

Sending 1 event

{
"name" : "eu_cars_record"
}

with

echo '{"name":"eu_cars_record"}' | \
kafka-console-producer \
--bootstrap-server localhost:6969 \
--producer.config teamA-sa.properties \
--topic eu_cars

Send into topic 'us_cars'

Producing 1 message to us_cars in virtual cluster teamA

Sending 1 event

{
"name" : "us_cars_record"
}

with

echo '{"name":"us_cars_record"}' | \
kafka-console-producer \
--bootstrap-server localhost:6969 \
--producer.config teamA-sa.properties \
--topic us_cars

Consuming from eu_cars

Consuming from eu_cars in cluster teamA

kafka-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config teamA-sa.properties \
--topic eu_cars \
--from-beginning \
--timeout-ms 10000 | jq

returns 1 event

{
"name" : "eu_cars_record"
}

Consuming from us_cars

Consuming from us_cars in cluster teamA

kafka-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config teamA-sa.properties \
--topic us_cars \
--from-beginning \
--timeout-ms 10000 | jq

returns 1 event

{
"name" : "us_cars_record"
}

Verify eu_cars_record landed in the main Kafka cluster

Consume the physical topic cars directly from the main cluster (kafka1); only eu_cars_record should be there, since us_cars was routed to cluster1

kafka-console-consumer \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--topic cars \
--from-beginning \
--timeout-ms 10000 | jq

returns 1 event

{
"name" : "eu_cars_record"
}
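
To make the negative half of the check explicit, you can grep the same dump for the record that should not be on the main cluster (the mirror-image check applies to cluster1 and eu_cars_record):

kafka-console-consumer \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--topic cars \
--from-beginning \
--timeout-ms 10000 | grep us_cars_record || echo "us_cars_record not found on main, as expected"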

Verify us_cars_record landed in cluster1

Consume the physical topic cars directly from cluster1 (s1_kafka1); only us_cars_record should be there, since eu_cars was routed to the main cluster

kafka-console-consumer \
--bootstrap-server localhost:29092,localhost:29093,localhost:29094 \
--topic cars \
--from-beginning \
--timeout-ms 10000 | jq

returns 1 event

{
"name" : "us_cars_record"
}

Tearing down the docker environment

Remove all the Docker containers and their associated volumes

  • --volumes: Remove named volumes declared in the "volumes" section of the Compose file and anonymous volumes attached to containers.
docker compose down --volumes

Conclusion

Merge cluster is simple: a few topic aliases are enough to let client applications read and write across several physical Kafka clusters through a single Gateway endpoint.