
Multi-tenancy, virtual clusters

View the full demo in real time

You can either follow all the steps manually or watch the recording

Review the docker compose environment

As shown in docker-compose.yaml, the demo environment consists of the following services:

  • gateway1
  • gateway2
  • kafka-client
  • kafka1
  • kafka2
  • kafka3
  • schema-registry
  • zookeeper
cat docker-compose.yaml
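
If you only want to confirm which services the file defines before starting anything, Compose can list them directly:

docker compose config --services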

Starting the docker environment

Start all the Docker containers in the background and wait for them to be up and healthy

  • --wait: Wait for services to be running|healthy. Implies detached mode.
  • --detach: Detached mode: Run containers in the background
docker compose up --detach --wait
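
Once the command returns, you can double-check that every service reports a running or healthy state:

docker compose ps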

Listing topics in kafka1

kafka-topics \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--list

Creating virtual cluster london

Creating virtual cluster london on gateway gateway1 and reviewing the configuration file to access it

# Generate virtual cluster london with service account sa
token=$(curl \
--request POST "http://localhost:8888/admin/vclusters/v1/vcluster/london/username/sa" \
--header 'Content-Type: application/json' \
--user 'admin:conduktor' \
--silent \
--data-raw '{"lifeTimeSeconds": 7776000}' | jq -r ".token")

# Create access file
echo """
bootstrap.servers=localhost:6969
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='sa' password='$token';
""" > london-sa.properties

# Review file
cat london-sa.properties
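
Before creating any topics, you can verify that the generated credentials are accepted by the gateway. One way (not part of the original demo) is to query the API versions through the virtual cluster with any Kafka CLI tool, for example:

kafka-broker-api-versions \
--bootstrap-server localhost:6969 \
--command-config london-sa.properties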

Creating virtual cluster paris

Creating virtual cluster paris on gateway gateway1 and reviewing the configuration file to access it

# Generate virtual cluster paris with service account sa
token=$(curl \
--request POST "http://localhost:8888/admin/vclusters/v1/vcluster/paris/username/sa" \
--header 'Content-Type: application/json' \
--user 'admin:conduktor' \
--silent \
--data-raw '{"lifeTimeSeconds": 7776000}' | jq -r ".token")

# Create access file
echo """
bootstrap.servers=localhost:6969
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='sa' password='$token';
""" > paris-sa.properties

# Review file
cat paris-sa.properties

Creating topic londonTopic on london

Creating on london:

  • Topic londonTopic with partitions:1 and replication-factor:1
kafka-topics \
--bootstrap-server localhost:6969 \
--command-config london-sa.properties \
--replication-factor 1 \
--partitions 1 \
--create --if-not-exists \
--topic londonTopic
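
If you want to double-check the partition count and replication factor, describe the topic through the same virtual cluster:

kafka-topics \
--bootstrap-server localhost:6969 \
--command-config london-sa.properties \
--describe \
--topic londonTopic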

Creating topic parisTopic on paris

Creating on paris:

  • Topic parisTopic with partitions:1 and replication-factor:1
kafka-topics \
--bootstrap-server localhost:6969 \
--command-config paris-sa.properties \
--replication-factor 1 \
--partitions 1 \
--create --if-not-exists \
--topic parisTopic

Listing topics in london

kafka-topics \
--bootstrap-server localhost:6969 \
--command-config london-sa.properties \
--list

Listing topics in paris

kafka-topics \
--bootstrap-server localhost:6969 \
--command-config paris-sa.properties \
--list
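
Each virtual cluster only sees its own topics: london lists londonTopic and paris lists parisTopic. If you are curious how the gateway stores these virtual topics on the backing cluster, list the physical topics on kafka1 directly; the physical names are managed by the gateway and may differ from the virtual names:

kafka-topics \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--list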

Producing 1 message in londonTopic

Producing 1 message in londonTopic in cluster london

Sending 1 event

{"message: "Hello from London"}

with

echo '{"message: "Hello from London"}' | \
kafka-console-producer \
--bootstrap-server localhost:6969 \
--producer.config london-sa.properties \
--topic londonTopic

Consuming from londonTopic

Consuming from londonTopic in cluster london

kafka-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config london-sa.properties \
--topic londonTopic \
--from-beginning \
--timeout-ms 10000 | jq
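
returns 1 event

{"message": "Hello from London"}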

Producing 1 message in parisTopic

Producing 1 message in parisTopic in cluster paris

Sending 1 event

{"message: "Bonjour depuis Paris"}

with

echo '{"message: "Bonjour depuis Paris"}' | \
kafka-console-producer \
--bootstrap-server localhost:6969 \
--producer.config paris-sa.properties \
--topic parisTopic

Consuming from parisTopic

Consuming from parisTopic in cluster paris

kafka-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config paris-sa.properties \
--topic parisTopic \
--from-beginning \
--timeout-ms 10000 | jq
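
returns 1 event

{"message": "Bonjour depuis Paris"}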

Creating topic existingLondonTopic on kafka1

Creating on kafka1:

  • Topic existingLondonTopic with partitions:1 and replication-factor:1
kafka-topics \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--replication-factor 1 \
--partitions 1 \
--create --if-not-exists \
--topic existingLondonTopic

Producing 1 message in existingLondonTopic

Producing 1 message in existingLondonTopic in cluster kafka1

Sending 1 event

{"message: "Hello from London"}

with

echo '{"message: "Hello from London"}' | \
kafka-console-producer \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--topic existingLondonTopic

Map existingLondonTopic to the london virtual cluster

curl \
--silent \
--request POST localhost:8888/admin/vclusters/v1/vcluster/london/topics/existingLondonTopic \
--user 'admin:conduktor' \
--header 'Content-Type: application/json' \
--data-raw '{
"physicalTopicName": "existingLondonTopic",
"readOnly": false,
"type": "alias"
}' | jq
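
This alias mapping makes the existing physical topic existingLondonTopic visible inside the london virtual cluster under the same name, without copying any data.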

Listing topics in london

kafka-topics \
--bootstrap-server localhost:6969 \
--command-config london-sa.properties \
--list
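
The aliased topic existingLondonTopic now appears alongside londonTopic.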

Creating topic existingSharedTopic on kafka1

Creating on kafka1:

  • Topic existingSharedTopic with partitions:1 and replication-factor:1
kafka-topics \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--replication-factor 1 \
--partitions 1 \
--create --if-not-exists \
--topic existingSharedTopic

Producing 1 message in existingSharedTopic

Producing 1 message in existingSharedTopic in cluster kafka1

Sending 1 event

{
"message" : "Existing shared message"
}

with

echo '{"message": "Existing shared message"}' | \
kafka-console-producer \
--bootstrap-server localhost:19092,localhost:19093,localhost:19094 \
--topic existingSharedTopic

Map existingSharedTopic to the london virtual cluster

curl \
--silent \
--request POST localhost:8888/admin/vclusters/v1/vcluster/london/topics/existingSharedTopic \
--user 'admin:conduktor' \
--header 'Content-Type: application/json' \
--data-raw '{
"physicalTopicName": "existingSharedTopic",
"readOnly": false,
"type": "alias"
}' | jq

Listing topics in london

kafka-topics \
--bootstrap-server localhost:6969 \
--command-config london-sa.properties \
--list

Consuming from existingLondonTopic

Consuming from existingLondonTopic in cluster london

kafka-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config london-sa.properties \
--topic existingLondonTopic \
--from-beginning \
--timeout-ms 10000 | jq

returns 1 event

{"message: "Hello from London"}

Consuming from existingSharedTopic

Consuming from existingSharedTopic in cluster london

kafka-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config london-sa.properties \
--topic existingSharedTopic \
--from-beginning \
--timeout-ms 10000 | jq

returns 1 event

{
"message" : "Existing shared message"
}

Map existingSharedTopic to the paris virtual cluster

curl \
--silent \
--request POST localhost:8888/admin/vclusters/v1/vcluster/paris/topics/existingSharedTopic \
--user 'admin:conduktor' \
--header 'Content-Type: application/json' \
--data-raw '{
"physicalTopicName": "existingSharedTopic",
"readOnly": false,
"type": "alias"
}' | jq

Listing topics in paris

kafka-topics \
--bootstrap-server localhost:6969 \
--command-config paris-sa.properties \
--list

Consuming from existingSharedTopic

Consuming from existingSharedTopic in cluster paris

kafka-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config paris-sa.properties \
--topic existingSharedTopic \
--from-beginning \
--timeout-ms 10000 | jq

returns 1 event

{
"message" : "Existing shared message"
}
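
Both london and paris read the same underlying event from kafka1: mapping one physical topic into several virtual clusters lets tenants share data without duplicating it.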

Tearing down the docker environment

Remove all your docker processes and associated volumes

  • --volumes: Remove named volumes declared in the "volumes" section of the Compose file and anonymous volumes attached to containers.
docker compose down --volumes

Conclusion

Multi-tenancy with virtual clusters is key to staying in control of your Kafka spend!