Field-level encryption with Schema Registry
Yes, it works with Avro and JSON Schema, including nested fields.
View the full demo in real time
Review the Docker Compose environment
As can be seen from docker-compose.yaml, the demo environment consists of the following services:
- gateway1
- gateway2
- kafka-client
- kafka1
- kafka2
- kafka3
- schema-registry
- zookeeper
cat docker-compose.yaml
version: '3.7'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2801
      ZOOKEEPER_TICK_TIME: 2000
    healthcheck:
      test: nc -zv 0.0.0.0 2801 || exit 1
      interval: 5s
      retries: 25
    labels:
      tag: conduktor
  kafka1:
    hostname: kafka1
    container_name: kafka1
    image: confluentinc/cp-kafka:latest
    ports:
      - 19092:19092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2801
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL_SAME_HOST://:19092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:9092,EXTERNAL_SAME_HOST://localhost:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_LOG4J_LOGGERS: kafka.authorizer.logger=INFO
      KAFKA_LOG4J_ROOT_LOGLEVEL: WARN
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: false
    depends_on:
      zookeeper:
        condition: service_healthy
    healthcheck:
      test: nc -zv kafka1 9092 || exit 1
      interval: 5s
      retries: 25
    labels:
      tag: conduktor
  kafka2:
    hostname: kafka2
    container_name: kafka2
    image: confluentinc/cp-kafka:latest
    ports:
      - 19093:19093
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2801
      KAFKA_LISTENERS: INTERNAL://:9093,EXTERNAL_SAME_HOST://:19093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:9093,EXTERNAL_SAME_HOST://localhost:19093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_LOG4J_LOGGERS: kafka.authorizer.logger=INFO
      KAFKA_LOG4J_ROOT_LOGLEVEL: WARN
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: false
    depends_on:
      zookeeper:
        condition: service_healthy
    healthcheck:
      test: nc -zv kafka2 9093 || exit 1
      interval: 5s
      retries: 25
    labels:
      tag: conduktor
  kafka3:
    image: confluentinc/cp-kafka:latest
    hostname: kafka3
    container_name: kafka3
    ports:
      - 19094:19094
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2801
      KAFKA_LISTENERS: INTERNAL://:9094,EXTERNAL_SAME_HOST://:19094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:9094,EXTERNAL_SAME_HOST://localhost:19094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_LOG4J_LOGGERS: kafka.authorizer.logger=INFO
      KAFKA_LOG4J_ROOT_LOGLEVEL: WARN
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: false
    depends_on:
      zookeeper:
        condition: service_healthy
    healthcheck:
      test: nc -zv kafka3 9094 || exit 1
      interval: 5s
      retries: 25
    labels:
      tag: conduktor
  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    hostname: schema-registry
    container_name: schema-registry
    ports:
      - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka1:9092,kafka2:9093,kafka3:9094
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: WARN
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
      SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID: schema-registry
    volumes:
      - type: bind
        source: .
        target: /clientConfig
        read_only: true
    depends_on:
      kafka1:
        condition: service_healthy
      kafka2:
        condition: service_healthy
      kafka3:
        condition: service_healthy
    healthcheck:
      test: nc -zv schema-registry 8081 || exit 1
      interval: 5s
      retries: 25
    labels:
      tag: conduktor
  gateway1:
    image: conduktor/conduktor-gateway:2.6.0
    hostname: gateway1
    container_name: gateway1
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka1:9092,kafka2:9093,kafka3:9094
      GATEWAY_ADVERTISED_HOST: localhost
      GATEWAY_MODE: VCLUSTER
      GATEWAY_SECURITY_PROTOCOL: SASL_PLAINTEXT
      GATEWAY_FEATURE_FLAGS_ANALYTICS: false
    depends_on:
      kafka1:
        condition: service_healthy
      kafka2:
        condition: service_healthy
      kafka3:
        condition: service_healthy
    ports:
      - 6969:6969
      - 6970:6970
      - 6971:6971
      - 8888:8888
    healthcheck:
      test: curl localhost:8888/health
      interval: 5s
      retries: 25
    labels:
      tag: conduktor
  gateway2:
    image: conduktor/conduktor-gateway:2.6.0
    hostname: gateway2
    container_name: gateway2
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka1:9092,kafka2:9093,kafka3:9094
      GATEWAY_ADVERTISED_HOST: localhost
      GATEWAY_MODE: VCLUSTER
      GATEWAY_SECURITY_PROTOCOL: SASL_PLAINTEXT
      GATEWAY_FEATURE_FLAGS_ANALYTICS: false
      GATEWAY_START_PORT: 7969
    depends_on:
      kafka1:
        condition: service_healthy
      kafka2:
        condition: service_healthy
      kafka3:
        condition: service_healthy
    ports:
      - 7969:7969
      - 7970:7970
      - 7971:7971
      - 8889:8888
    healthcheck:
      test: curl localhost:8888/health
      interval: 5s
      retries: 25
    labels:
      tag: conduktor
  kafka-client:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-client
    container_name: kafka-client
    command: sleep infinity
    volumes:
      - type: bind
        source: .
        target: /clientConfig
        read_only: true
    labels:
      tag: conduktor
networks:
  demo: null
Starting the Docker environment
Start all your Docker processes in the background and wait for them to be up and ready.
- --wait: wait for services to be running|healthy; implies detached mode.
- --detach: detached mode, run containers in the background.
docker compose up --detach --wait
Network encryption-schema-registry_default Creating
Network encryption-schema-registry_default Created
Container kafka-client Creating
Container zookeeper Creating
Container kafka-client Created
Container zookeeper Created
Container kafka2 Creating
Container kafka1 Creating
Container kafka3 Creating
Container kafka1 Created
Container kafka3 Created
Container kafka2 Created
Container schema-registry Creating
Container gateway2 Creating
Container gateway1 Creating
gateway2 The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
gateway1 The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Container gateway2 Created
Container gateway1 Created
Container schema-registry Created
Container kafka-client Starting
Container zookeeper Starting
Container zookeeper Started
Container zookeeper Waiting
Container zookeeper Waiting
Container zookeeper Waiting
Container kafka-client Started
Container zookeeper Healthy
Container kafka3 Starting
Container zookeeper Healthy
Container kafka2 Starting
Container zookeeper Healthy
Container kafka1 Starting
Container kafka3 Started
Container kafka1 Started
Container kafka2 Started
Container kafka2 Waiting
Container kafka3 Waiting
Container kafka1 Waiting
Container kafka1 Waiting
Container kafka2 Waiting
Container kafka3 Waiting
Container kafka2 Waiting
Container kafka3 Waiting
Container kafka1 Waiting
Container kafka2 Healthy
Container kafka3 Healthy
Container kafka1 Healthy
Container kafka3 Healthy
Container kafka2 Healthy
Container kafka1 Healthy
Container kafka2 Healthy
Container gateway2 Starting
Container kafka1 Healthy
Container schema-registry Starting
Container kafka3 Healthy
Container gateway1 Starting
Container gateway2 Started
Container gateway1 Started
Container schema-registry Started
Container kafka3 Waiting
Container schema-registry Waiting
Container gateway1 Waiting
Container gateway2 Waiting
Container kafka-client Waiting
Container zookeeper Waiting
Container kafka1 Waiting
Container kafka2 Waiting
Container kafka2 Healthy
Container kafka1 Healthy
Container kafka3 Healthy
Container zookeeper Healthy
Container kafka-client Healthy
Container gateway2 Healthy
Container gateway1 Healthy
Container schema-registry Healthy
Let's assert the number of registered schemas
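A minimal check, assuming the registry is reachable on localhost:8081 as configured above, is to count the registered subjects:
curl --silent http://localhost:8081/subjects | jq 'length'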
Creating virtual cluster teamA
Creating virtual cluster teamA on gateway gateway1, and reviewing the configuration file used to access it.
# Generate virtual cluster teamA with service account sa
token=$(curl \
--request POST "http://localhost:8888/admin/vclusters/v1/vcluster/teamA/username/sa" \
--header 'Content-Type: application/json' \
--user 'admin:conduktor' \
--silent \
--data-raw '{"lifeTimeSeconds": 7776000}' | jq -r ".token")
# Create access file
echo """
bootstrap.servers=localhost:6969
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='sa' password='$token';
""" > teamA-sa.properties
# Review file
cat teamA-sa.properties
bootstrap.servers=localhost:6969
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='sa' password='eyJhbGciOiJIUzI1NiJ9.eyJ1c2VybmFtZSI6InNhIiwidmNsdXN0ZXIiOiJ0ZWFtQSIsImV4cCI6MTcxNTY1MDgzNX0.pfgclAJHSLHMJRlUE9atL_5tmv1qH0qe-0Qq2FgE4d0';
Creating topic customers on teamA
Creating on teamA:
- Topic customers with partitions: 1 and replication-factor: 1
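A sketch of the creation command, using the standard Kafka CLI against the gateway and the teamA-sa.properties file generated above:
kafka-topics \
  --bootstrap-server localhost:6969 \
  --command-config teamA-sa.properties \
  --create \
  --topic customers \
  --partitions 1 \
  --replication-factor 1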
Adding interceptor encrypt
We want to encrypt two fields at the root level (password and visa) and the location field in the address object. Here we are using an in-memory KMS.
Creating the interceptor named encrypt of the plugin io.conduktor.gateway.interceptor.EncryptSchemaBasedPlugin using the following payload:
{
"pluginClass" : "io.conduktor.gateway.interceptor.EncryptSchemaBasedPlugin",
"priority" : 100,
"config" : {
"schemaRegistryConfig" : {
"host" : "http://schema-registry:8081"
},
"defaultKeySecretId" : "myDefaultKeySecret",
"defaultAlgorithm" : {
"type" : "TINK/AES128_EAX",
"kms" : "IN_MEMORY"
},
"tags" : [ "PII", "ENCRYPTION", "GDPR" ]
}
}
Here's how to send it:
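A sketch of the request, assuming the admin API accepts a POST at the listing path shown below with the interceptor name appended:
curl \
  --request POST 'http://localhost:8888/admin/interceptors/v1/vcluster/teamA/interceptor/encrypt' \
  --header 'Content-Type: application/json' \
  --user 'admin:conduktor' \
  --silent \
  --data-raw '{
    "pluginClass": "io.conduktor.gateway.interceptor.EncryptSchemaBasedPlugin",
    "priority": 100,
    "config": {
      "schemaRegistryConfig": { "host": "http://schema-registry:8081" },
      "defaultKeySecretId": "myDefaultKeySecret",
      "defaultAlgorithm": { "type": "TINK/AES128_EAX", "kms": "IN_MEMORY" },
      "tags": [ "PII", "ENCRYPTION", "GDPR" ]
    }
  }' | jq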
Listing interceptors for teamA
Listing interceptors on gateway1 for virtual cluster teamA.
curl \
--request GET 'http://localhost:8888/admin/interceptors/v1/vcluster/teamA' \
--header 'Content-Type: application/json' \
--user 'admin:conduktor' \
--silent | jq
{
"interceptors": [
{
"name": "encrypt",
"pluginClass": "io.conduktor.gateway.interceptor.EncryptSchemaBasedPlugin",
"apiKey": null,
"priority": 100,
"timeoutMs": 9223372036854775807,
"config": {
"schemaRegistryConfig": {
"host": "http://schema-registry:8081"
},
"defaultKeySecretId": "myDefaultKeySecret",
"defaultAlgorithm": {
"type": "TINK/AES128_EAX",
"kms": "IN_MEMORY"
},
"tags": [
"PII",
"ENCRYPTION",
"GDPR"
]
}
}
]
}
Registering schema for customers
{
"$id": "https://example.com/person.schema.json",
"$schema": "https://json-schema.org/draft/2020-12/schema",
"title": "Customer",
"type": "object",
"properties": {
"name": { "type": "string" },
"username": { "type": "string" },
"password": { "type": "string" },
"visa": { "type": "string" },
"address": {
"type": "object",
"properties": {
"location": { "type": "string", "conduktor.tags": ["PII", "GDPR"] },
"town": { "type": "string" },
"country": { "type": "string" }
}
}
}
}
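A sketch of the registration, assuming the standard Confluent Schema Registry REST API and a hypothetical local file customer.schema.json holding the document above (the subject customers-value follows the default TopicNameStrategy):
jq -n --rawfile schema customer.schema.json '{schemaType: "JSON", schema: $schema}' | \
  curl --silent \
    --request POST 'http://localhost:8081/subjects/customers-value/versions' \
    --header 'Content-Type: application/vnd.schemaregistry.v1+json' \
    --data @- | jq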
Let's send an unencrypted JSON message
Producing 1 message to customers in cluster teamA.
Sending 1 event
{
"name" : "tom",
"username" : "tom@conduktor.io",
"password" : "motorhead",
"visa" : "#abc123",
"address" : {
"location" : "12 Chancery lane",
"town" : "London",
"country" : "UK"
}
}
with
echo '{"name":"tom","username":"tom@conduktor.io","password":"motorhead","visa":"#abc123","address":{"location":"12 Chancery lane","town":"London","country":"UK"}}' | \
kafka-json-schema-console-producer \
--bootstrap-server localhost:6969 \
--producer.config teamA-sa.properties \
--property "value.schema.id=1" \
--property "schema.registry.url=http://localhost:8081" \
--topic customers
[2024-02-14 02:40:39,899] INFO KafkaJsonSchemaSerializerConfig values:
auto.register.schemas = true
basic.auth.credentials.source = URL
basic.auth.user.info = [hidden]
bearer.auth.cache.expiry.buffer.seconds = 300
bearer.auth.client.id = null
bearer.auth.client.secret = null
bearer.auth.credentials.source = STATIC_TOKEN
bearer.auth.custom.provider.class = null
bearer.auth.identity.pool.id = null
bearer.auth.issuer.endpoint.url = null
bearer.auth.logical.cluster = null
bearer.auth.scope = null
bearer.auth.scope.claim.name = scope
bearer.auth.sub.claim.name = sub
bearer.auth.token = [hidden]
context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
http.connect.timeout.ms = 60000
http.read.timeout.ms = 60000
id.compatibility.strict = true
json.fail.invalid.schema = true
json.fail.unknown.properties = true
json.indent.output = false
json.oneof.for.nullables = true
json.schema.spec.version = draft_7
json.write.dates.iso8601 = false
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
latest.cache.size = 1000
latest.cache.ttl.sec = -1
latest.compatibility.strict = true
max.schemas.per.subject = 1000
normalize.schemas = false
proxy.host =
proxy.port = -1
rule.actions = []
rule.executors = []
rule.service.loader.enable = true
schema.format = null
schema.reflection = false
schema.registry.basic.auth.user.info = [hidden]
schema.registry.ssl.cipher.suites = null
schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema.registry.ssl.endpoint.identification.algorithm = https
schema.registry.ssl.engine.factory.class = null
schema.registry.ssl.key.password = null
schema.registry.ssl.keymanager.algorithm = SunX509
schema.registry.ssl.keystore.certificate.chain = null
schema.registry.ssl.keystore.key = null
schema.registry.ssl.keystore.location = null
schema.registry.ssl.keystore.password = null
schema.registry.ssl.keystore.type = JKS
schema.registry.ssl.protocol = TLSv1.3
schema.registry.ssl.provider = null
schema.registry.ssl.secure.random.implementation = null
schema.registry.ssl.trustmanager.algorithm = PKIX
schema.registry.ssl.truststore.certificates = null
schema.registry.ssl.truststore.location = null
schema.registry.ssl.truststore.password = null
schema.registry.ssl.truststore.type = JKS
schema.registry.url = [http://localhost:8081]
use.latest.version = false
use.latest.with.metadata = null
use.schema.id = -1
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
(io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializerConfig:376)
Registering an updated schema for customers
{
"$id": "https://example.com/person.schema.json",
"$schema": "https://json-schema.org/draft/2020-12/schema",
"title": "Customer",
"type": "object",
"properties": {
"name": { "type": "string" },
"username": { "type": "string" },
"password": { "type": "string", "conduktor.keySecretId": "password-secret"},
"visa": { "type": "string", "conduktor.keySecretId": "visa-secret" },
"address": {
"type": "object",
"properties": {
"location": { "type": "string", "conduktor.tags": ["PII", "GDPR"] },
"town": { "type": "string" },
"country": { "type": "string" }
}
}
}
}
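The updated schema is registered with the same POST to the customers-value subject as before; the registry stores it as a new version with its own schema id.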
Schema diff
14c14,15
< "type": "string"
---
> "type": "string",
> "conduktor.keySecretId": "password-secret"
17c18,19
< "type": "string"
---
> "type": "string",
> "conduktor.keySecretId": "visa-secret"
Let's make sure they are encrypted
password, visa, and the nested field address.location are encrypted.
kafka-json-schema-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config teamA-sa.properties \
--topic customers \
--from-beginning \
--timeout-ms 10000 \
--property "schema.registry.url=http://localhost:8081"| grep '{' | jq
returns
Processed a total of 1 messages
{
"name": "tom",
"username": "tom@conduktor.io",
"password": "motorhead",
"visa": "#abc123",
"address": {
"location": "12 Chancery lane",
"town": "London",
"country": "UK"
}
}
Let's send more unencrypted JSON messages
Producing 2 messages to customers in cluster teamA.
Sending 2 events
{
"name" : "tom",
"username" : "tom@conduktor.io",
"password" : "motorhead",
"visa" : "#abc123",
"address" : {
"location" : "12 Chancery lane",
"town" : "London",
"country" : "UK"
}
}
{
"name" : "laura",
"username" : "laura@conduktor.io",
"password" : "kitesurf",
"visa" : "#888999XZ;",
"address" : {
"location" : "4th Street, Jumeirah",
"town" : "Dubai",
"country" : "UAE"
}
}
with
echo '{"name":"tom","username":"tom@conduktor.io","password":"motorhead","visa":"#abc123","address":{"location":"12 Chancery lane","town":"London","country":"UK"}}' | \
kafka-json-schema-console-producer \
--bootstrap-server localhost:6969 \
--producer.config teamA-sa.properties \
--property "value.schema.id=1" \
--property "schema.registry.url=http://localhost:8081" \
--topic customers
echo '{"name":"laura","username":"laura@conduktor.io","password":"kitesurf","visa":"#888999XZ;","address":{"location":"4th Street, Jumeirah","town":"Dubai", "country":"UAE"}}' | \
kafka-json-schema-console-producer \
--bootstrap-server localhost:6969 \
--producer.config teamA-sa.properties \
--property "value.schema.id=1" \
--property "schema.registry.url=http://localhost:8081" \
--topic customers
[2024-02-14 02:40:53,366] INFO KafkaJsonSchemaSerializerConfig values:
auto.register.schemas = true
basic.auth.credentials.source = URL
basic.auth.user.info = [hidden]
bearer.auth.cache.expiry.buffer.seconds = 300
bearer.auth.client.id = null
bearer.auth.client.secret = null
bearer.auth.credentials.source = STATIC_TOKEN
bearer.auth.custom.provider.class = null
bearer.auth.identity.pool.id = null
bearer.auth.issuer.endpoint.url = null
bearer.auth.logical.cluster = null
bearer.auth.scope = null
bearer.auth.scope.claim.name = scope
bearer.auth.sub.claim.name = sub
bearer.auth.token = [hidden]
context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
http.connect.timeout.ms = 60000
http.read.timeout.ms = 60000
id.compatibility.strict = true
json.fail.invalid.schema = true
json.fail.unknown.properties = true
json.indent.output = false
json.oneof.for.nullables = true
json.schema.spec.version = draft_7
json.write.dates.iso8601 = false
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
latest.cache.size = 1000
latest.cache.ttl.sec = -1
latest.compatibility.strict = true
max.schemas.per.subject = 1000
normalize.schemas = false
proxy.host =
proxy.port = -1
rule.actions = []
rule.executors = []
rule.service.loader.enable = true
schema.format = null
schema.reflection = false
schema.registry.basic.auth.user.info = [hidden]
schema.registry.ssl.cipher.suites = null
schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema.registry.ssl.endpoint.identification.algorithm = https
schema.registry.ssl.engine.factory.class = null
schema.registry.ssl.key.password = null
schema.registry.ssl.keymanager.algorithm = SunX509
schema.registry.ssl.keystore.certificate.chain = null
schema.registry.ssl.keystore.key = null
schema.registry.ssl.keystore.location = null
schema.registry.ssl.keystore.password = null
schema.registry.ssl.keystore.type = JKS
schema.registry.ssl.protocol = TLSv1.3
schema.registry.ssl.provider = null
schema.registry.ssl.secure.random.implementation = null
schema.registry.ssl.trustmanager.algorithm = PKIX
schema.registry.ssl.truststore.certificates = null
schema.registry.ssl.truststore.location = null
schema.registry.ssl.truststore.password = null
schema.registry.ssl.truststore.type = JKS
schema.registry.url = [http://localhost:8081]
use.latest.version = false
use.latest.with.metadata = null
use.schema.id = -1
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
(io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializerConfig:376)
[2024-02-14 02:40:54,871] INFO KafkaJsonSchemaSerializerConfig values:
auto.register.schemas = true
basic.auth.credentials.source = URL
basic.auth.user.info = [hidden]
bearer.auth.cache.expiry.buffer.seconds = 300
bearer.auth.client.id = null
bearer.auth.client.secret = null
bearer.auth.credentials.source = STATIC_TOKEN
bearer.auth.custom.provider.class = null
bearer.auth.identity.pool.id = null
bearer.auth.issuer.endpoint.url = null
bearer.auth.logical.cluster = null
bearer.auth.scope = null
bearer.auth.scope.claim.name = scope
bearer.auth.sub.claim.name = sub
bearer.auth.token = [hidden]
context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
http.connect.timeout.ms = 60000
http.read.timeout.ms = 60000
id.compatibility.strict = true
json.fail.invalid.schema = true
json.fail.unknown.properties = true
json.indent.output = false
json.oneof.for.nullables = true
json.schema.spec.version = draft_7
json.write.dates.iso8601 = false
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
latest.cache.size = 1000
latest.cache.ttl.sec = -1
latest.compatibility.strict = true
max.schemas.per.subject = 1000
normalize.schemas = false
proxy.host =
proxy.port = -1
rule.actions = []
rule.executors = []
rule.service.loader.enable = true
schema.format = null
schema.reflection = false
schema.registry.basic.auth.user.info = [hidden]
schema.registry.ssl.cipher.suites = null
schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema.registry.ssl.endpoint.identification.algorithm = https
schema.registry.ssl.engine.factory.class = null
schema.registry.ssl.key.password = null
schema.registry.ssl.keymanager.algorithm = SunX509
schema.registry.ssl.keystore.certificate.chain = null
schema.registry.ssl.keystore.key = null
schema.registry.ssl.keystore.location = null
schema.registry.ssl.keystore.password = null
schema.registry.ssl.keystore.type = JKS
schema.registry.ssl.protocol = TLSv1.3
schema.registry.ssl.provider = null
schema.registry.ssl.secure.random.implementation = null
schema.registry.ssl.trustmanager.algorithm = PKIX
schema.registry.ssl.truststore.certificates = null
schema.registry.ssl.truststore.location = null
schema.registry.ssl.truststore.password = null
schema.registry.ssl.truststore.type = JKS
schema.registry.url = [http://localhost:8081]
use.latest.version = false
use.latest.with.metadata = null
use.schema.id = -1
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
(io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializerConfig:376)
laura's password and visa are also encrypted
laura's password and visa are also encrypted in cluster teamA
kafka-json-schema-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config teamA-sa.properties \
--topic customers \
--from-beginning \
--timeout-ms 10000 \
--property "schema.registry.url=http://localhost:8081"| grep '{' | jq
returns
Processed a total of 3 messages
{
"name": "tom",
"username": "tom@conduktor.io",
"password": "motorhead",
"visa": "#abc123",
"address": {
"location": "12 Chancery lane",
"town": "London",
"country": "UK"
}
}
{
"name": "tom",
"username": "tom@conduktor.io",
"password": "motorhead",
"visa": "#abc123",
"address": {
"location": "12 Chancery lane",
"town": "London",
"country": "UK"
}
}
{
"name": "laura",
"username": "laura@conduktor.io",
"password": "kitesurf",
"visa": "#888999XZ;",
"address": {
"location": "4th Street, Jumeirah",
"town": "Dubai",
"country": "UAE"
}
}
Adding interceptor decrypt
Let's add the decrypt interceptor to decipher messages.
Creating the interceptor named decrypt of the plugin io.conduktor.gateway.interceptor.DecryptPlugin using the following payload:
{
"pluginClass" : "io.conduktor.gateway.interceptor.DecryptPlugin",
"priority" : 100,
"config" : {
"topic" : "customers",
"schemaRegistryConfig" : {
"host" : "http://schema-registry:8081"
}
}
}
Here's how to send it:
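As with the encrypt interceptor, a sketch assuming the same admin endpoint shape:
curl \
  --request POST 'http://localhost:8888/admin/interceptors/v1/vcluster/teamA/interceptor/decrypt' \
  --header 'Content-Type: application/json' \
  --user 'admin:conduktor' \
  --silent \
  --data-raw '{
    "pluginClass": "io.conduktor.gateway.interceptor.DecryptPlugin",
    "priority": 100,
    "config": {
      "topic": "customers",
      "schemaRegistryConfig": { "host": "http://schema-registry:8081" }
    }
  }' | jq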
Listing interceptors for teamA
Listing interceptors on gateway1 for virtual cluster teamA.
curl \
--request GET 'http://localhost:8888/admin/interceptors/v1/vcluster/teamA' \
--header 'Content-Type: application/json' \
--user 'admin:conduktor' \
--silent | jq
{
"interceptors": [
{
"name": "decrypt",
"pluginClass": "io.conduktor.gateway.interceptor.DecryptPlugin",
"apiKey": null,
"priority": 100,
"timeoutMs": 9223372036854775807,
"config": {
"topic": "customers",
"schemaRegistryConfig": {
"host": "http://schema-registry:8081"
}
}
},
{
"name": "encrypt",
"pluginClass": "io.conduktor.gateway.interceptor.EncryptSchemaBasedPlugin",
"apiKey": null,
"priority": 100,
"timeoutMs": 9223372036854775807,
"config": {
"schemaRegistryConfig": {
"host": "http://schema-registry:8081"
},
"defaultKeySecretId": "myDefaultKeySecret",
"defaultAlgorithm": {
"type": "TINK/AES128_EAX",
"kms": "IN_MEMORY"
},
"tags": [
"PII",
"ENCRYPTION",
"GDPR"
]
}
}
]
}
Let's make sure they are decrypted
password, visa, and the nested field address.location are decrypted.
kafka-json-schema-console-consumer \
--bootstrap-server localhost:6969 \
--consumer.config teamA-sa.properties \
--topic customers \
--from-beginning \
--timeout-ms 10000 \
--property "schema.registry.url=http://localhost:8081"| grep '{' | jq
returns
Processed a total of 3 messages
{
"name": "tom",
"username": "tom@conduktor.io",
"password": "motorhead",
"visa": "#abc123",
"address": {
"location": "12 Chancery lane",
"town": "London",
"country": "UK"
}
}
{
"name": "tom",
"username": "tom@conduktor.io",
"password": "motorhead",
"visa": "#abc123",
"address": {
"location": "12 Chancery lane",
"town": "London",
"country": "UK"
}
}
{
"name": "laura",
"username": "laura@conduktor.io",
"password": "kitesurf",
"visa": "#888999XZ;",
"address": {
"location": "4th Street, Jumeirah",
"town": "Dubai",
"country": "UAE"
}
}
Tearing down the Docker environment
Remove all your Docker processes and associated volumes.
- --volumes: remove named volumes declared in the volumes section of the Compose file and anonymous volumes attached to containers.
docker compose down --volumes
Container kafka-client Stopping
Container gateway2 Stopping
Container gateway1 Stopping
Container schema-registry Stopping
Container schema-registry Stopped
Container schema-registry Removing
Container schema-registry Removed
Container gateway1 Stopped
Container gateway1 Removing
Container gateway1 Removed
Container gateway2 Stopped
Container gateway2 Removing
Container gateway2 Removed
Container kafka3 Stopping
Container kafka2 Stopping
Container kafka1 Stopping
Container kafka1 Stopped
Container kafka1 Removing
Container kafka1 Removed
Container kafka3 Stopped
Container kafka3 Removing
Container kafka3 Removed
Container kafka-client Stopped
Container kafka-client Removing
Container kafka-client Removed
Container kafka2 Stopped
Container kafka2 Removing
Container kafka2 Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Network encryption-schema-registry_default Removing
Network encryption-schema-registry_default Removed
Conclusion
Yes, encryption in the Kafka world can be simple!