Configure audit logging, data masking, header injection, the message integrity Interceptor and encryption Interceptors to secure and track data in your Kafka topics.
| Job to do | Interceptor to use |
|---|---|
| Track the APIs used by an application | Audit |
| Hide sensitive fields from consumers | Data masking |
| Add metadata to messages | Header injection |
| Sign and verify message integrity | Message integrity Interceptor |
| Encrypt data | Encryption |

Common configuration

The following configurations are shared across multiple Interceptors:

Environment variables as secrets

To ensure your secrets don’t appear in your Interceptors, you can refer to the environment variables set in your Gateway container. Use the format ${MY_ENV_VAR}. We recommend using this for schema registry or Vault secrets and any other values you’d like to hide in the configuration.
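For instance, a schema registry credential can reference an environment variable instead of a literal value (illustrative fragment; SR_BASIC_AUTH_USER_INFO is assumed to be set on the Gateway container):

```json
{
  "schemaRegistryConfig": {
    "host": "http://schema-registry:8081",
    "additionalConfigs": {
      "basic.auth.user.info": "${SR_BASIC_AUTH_USER_INFO}"
    }
  }
}
```

Gateway resolves the `${...}` placeholder when it loads the configuration, so the secret never appears in the Interceptor definition itself.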

Audit Interceptor

This Interceptor logs information from Kafka API requests. To use it, inject it and implement the ApiKeyAuditLog interface for auditing. The currently supported Kafka API requests are:
  • ProduceRequest (PRODUCE)
  • FetchRequest (FETCH)
  • CreateTopicRequest (CREATE_TOPICS)
  • DeleteTopicRequest (DELETE_TOPICS)
  • AlterConfigRequest (ALTER_CONFIGS)

Configure audit Interceptor

| Name | Type | Default | Description |
|---|---|---|---|
| topic | String | .* | Topics that match this regex will have the Interceptor applied |
| apiKeys | Set[String] | | Set of Kafka API keys to be audited |
| vcluster | String | .* | vcluster that matches this regex will have the Interceptor applied |
| username | String | .* | username that matches this regex will have the Interceptor applied |
| consumerGroupId | String | .* | consumerGroupId that matches this regex will have the Interceptor applied |
| topicPartitions | Set[Integer] | | Set of topic partitions to be audited |

Audit Interceptor example

curl \
  --request PUT \
  --url 'http://localhost:8888/gateway/v2/interceptor' \
  --header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "name": "myAuditInterceptorPlugin",
  "pluginClass": "io.conduktor.gateway.interceptor.AuditPlugin",
  "priority": 100,
  "config": {
    "topic": ".*",
    "apiKeys": [
      "PRODUCE",
      "FETCH"
    ],
    "vcluster": ".*",
    "username": ".*",
    "consumerGroupId": ".*",
    "topicPartitions": [
      1,
      2
    ]
  }
}'

Data masking Interceptor

The field-level data masking Interceptor masks sensitive fields within messages as they are consumed.

Configure data masking Interceptor

The policies will be applied when consuming messages.
| Key | Type | Default | Description |
|---|---|---|---|
| topic | String | .* | Topics that match this regex will have the Interceptor applied. |
| policies | Policy list | | List of your masking policies. |
| errorPolicy | String | fail_fetch | Determines the plugin behavior when it can’t parse a fetched message without an associated schema: fail_fetch or skip_masking. |
| schemaRegistryConfig | Schema registry | | The schema registry in use. Required for Avro, JSON Schema or Protobuf data. |

Data masking policy

| Key | Type | Description |
|---|---|---|
| name | String | Unique name to identify your policy. |
| fields | Set of String | Set of fields that should be obfuscated with the masking rule. Fields can be in a nested structure with a dot (.). For example: education.account.username, banks[0].accountNo or banks[*].accountNo. |
| rule | Rule | Masking rule to apply. |
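To make the path syntax concrete, here’s a small Python sketch (a hypothetical helper, not Gateway code) that resolves dotted paths with numeric list indices against a decoded record value:

```python
import re

def resolve(path, record):
    """Resolve a field path such as 'banks[0].accountNo' against a
    decoded record value. Illustrative only; Gateway's own resolution
    also supports the '[*]' wildcard, which fans out over every element."""
    current = record
    for part in path.split("."):
        m = re.fullmatch(r"(\w+)\[(\d+)\]", part)
        if m:  # list access, e.g. banks[0]
            current = current[m.group(1)][int(m.group(2))]
        else:  # plain nested field, e.g. education -> account -> username
            current = current[part]
    return current

record = {
    "education": {"account": {"username": "alice"}},
    "banks": [{"accountNo": "12345678"}, {"accountNo": "87654321"}],
}
print(resolve("education.account.username", record))  # alice
print(resolve("banks[0].accountNo", record))          # 12345678
```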

Data masking rule

| Key | Type | Default | Description |
|---|---|---|---|
| type | Masking type | MASK_ALL | The type of masking (see below). |
| maskingChar | String | * | The character(s) used for masking data. |
| numberOfChars | number | | Number of masked characters; required if type != MASK_ALL. |

Masking type

  • MASK_ALL: all data will be masked
  • MASK_FIRST_N: the first n characters will be masked
  • MASK_LAST_N: the last n characters will be masked
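As an illustration of the three types, here is a minimal Python sketch (not Gateway’s implementation; it assumes MASK_ALL replaces every character while preserving length):

```python
def mask(value, mask_type="MASK_ALL", masking_char="*", number_of_chars=0):
    # Sketch of the masking rules above; assumes MASK_ALL masks every character.
    if mask_type == "MASK_ALL":
        return masking_char * len(value)
    n = min(number_of_chars, len(value))  # numberOfChars, clamped to length
    if mask_type == "MASK_FIRST_N":
        return masking_char * n + value[n:]
    if mask_type == "MASK_LAST_N":
        return value[:len(value) - n] + masking_char * n
    raise ValueError(f"unknown masking type: {mask_type}")

print(mask("secret"))                                   # ******
print(mask("4111222233334444", "MASK_LAST_N", "X", 4))  # 411122223333XXXX
```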

Error policy

You can control the plugin behavior when it can’t parse a fetched message through its errorPolicy which can be set to fail_fetch or skip_masking.
The error policy only applies to messages that do not have an associated schema. When a message has a schema (Avro, JSON Schema or Protobuf), the plugin uses the schema to parse the message and the error policy is not triggered.
The default is fail_fetch. In this mode, the plugin will return a failure to read the batch which the fetch record is part of, effectively blocking any consumer. In skip_masking mode, if there’s a failure to parse a message being fetched (e.g. an encrypted record or a schemaless message that can’t be parsed), then that record is skipped and returned un-masked.

Full payload encryption compatibility

Data masking is compatible with full payload encryption. When both Interceptors are applied to the same topic, data masking automatically detects records with full payload encryption headers and skips them, preventing deserialization errors that would otherwise occur when attempting to mask encrypted content. Check out the encryption configuration for details. Field level encryption is not affected by this behavior.

Schema registry

Gateway supports Confluent-like and AWS Glue schema registries.
| Key | Type | Default | Description |
|---|---|---|---|
| type | string | CONFLUENT | The type of schema registry to use: choose CONFLUENT (for Confluent-like schema registries, including OSS Kafka) or AWS (for AWS Glue schema registries). |
| additionalConfigs | map | | Map of additional properties for specific security-related parameters. For enhanced security, you can hide the sensitive values using environment variables as secrets. |
| **Confluent-like** | | | Configuration for Confluent-like schema registries |
| host | string | | URL of your schema registry. |
| cacheSize | string | 50 | Number of schemas that can be cached locally by this Interceptor so that it doesn’t have to query the schema registry every time. |
| **AWS Glue** | | | Configuration for AWS Glue schema registries |
| region | string | | The AWS region for the schema registry, e.g. us-east-1. |
| registryName | string | | The name of the schema registry in AWS (leave blank for the AWS default of default-registry). |
| basicCredentials | string | | Access credentials for AWS. |
| **AWS credentials** | | | AWS credential configuration |
| accessKey | string | | The access key for the connection to the schema registry. |
| secretKey | string | | The secret key for the connection to the schema registry. |
| validateCredentials | bool | true | true / false flag to determine whether the credentials provided should be validated when set. |
| accountId | string | | The ID of the AWS account to use. |
If you don’t supply a basicCredentials section for the AWS Glue schema registry, the client will attempt to find the connection information it needs from the environment (see the AWS docs for details); the required credentials can be passed to Gateway this way as part of its core configuration. Read our blog about schema registry.

Data masking Interceptor example

curl \
  --request PUT \
  --url 'http://localhost:8888/gateway/v2/interceptor' \
  --header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "name": "myFieldLevelDataMaskingPlugin",
  "pluginClass": "io.conduktor.gateway.interceptor.FieldLevelDataMaskingPlugin",
  "priority": 100,
  "config": {
    "schemaRegistryConfig": {
      "host": "http://schema-registry:8081"
    },
    "policies": [
      {
        "name": "Mask password",
        "rule": {
          "type": "MASK_ALL"
        },
        "fields": [
          "password"
        ]
      },
      {
        "name": "Mask visa",
        "rule": {
          "type": "MASK_LAST_N",
          "maskingChar": "X",
          "numberOfChars": 4
        },
        "fields": [
          "visa"
        ]
      }
    ]
  }
}'

Secured schema registry

curl \
  --request PUT \
  --url 'http://localhost:8888/gateway/v2/interceptor' \
  --header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "name": "myFieldLevelDataMaskingPlugin",
  "pluginClass": "io.conduktor.gateway.interceptor.FieldLevelDataMaskingPlugin",
  "priority": 100,
  "config": {
    "schemaRegistryConfig": {
      "host": "http://schema-registry:8081",
      "additionalConfigs": {
        "schema.registry.url": "${SR_URL}",
        "basic.auth.credentials.source": "${SR_BASIC_AUTH_CRED_SRC}",
        "basic.auth.user.info": "${SR_BASIC_AUTH_USER_INFO}"
      }
    },
    "policies": [
      {
        "name": "Mask password",
        "rule": {
          "type": "MASK_ALL"
        },
        "fields": [
          "password"
        ]
      },
      {
        "name": "Mask visa",
        "rule": {
          "type": "MASK_LAST_N",
          "maskingChar": "X",
          "numberOfChars": 4
        },
        "fields": [
          "visa"
        ]
      }
    ]
  }
}'

Dynamic header injection Interceptor

This Interceptor injects headers (such as user IP) to the messages as they are produced through Gateway. We support templating in this format: X-CLIENT_IP: "{{userIp}} testing".

Context variables

These values are available as template variables:
  • uuid
  • userIp
  • vcluster
  • user
  • clientId
  • gatewayIp
  • gatewayHost
  • gatewayVersion
  • apiKey
  • apiKeyVersion
  • timestampMillis

Record extraction templates

You can also extract fields from the record key or value:
  • {{record.key}} - extract the entire key payload as a string
  • {{record.value}} - extract the entire value payload as a string
  • {{record.key.fieldName}} - extract a specific field from the record key
  • {{record.value.fieldName}} - extract a specific field from the record value
For example, if your record has a key with a field named “id”, you can use {{record.key.id}} to extract that value and inject it as a header.
To use field extraction (record.key.fieldName or record.value.fieldName) with Avro, JSON Schema or Protobuf data, you have to configure schemaRegistryConfig so Gateway can deserialize the records. For plain JSON data, no schema registry is needed.

Configure header injection Interceptor

| Config | Type | Default | Description |
|---|---|---|---|
| topic | String | | Regular expression that matches topics from your produce request. |
| headers | Map | | Map of header key and header value to inject. Values can use template variables like {{userIp}} or record extraction patterns. |
| overrideIfExists | boolean | false | Whether to override headers that already exist on the record. |
| failOnError | boolean | false | Whether to fail the request if header injection fails. When false, errors are logged and the request continues. |
| schemaRegistryConfig | Schema registry | | Schema registry configuration. Required when using record.key.fieldName or record.value.fieldName templates with schema-encoded data. |

Error handling

The failOnError setting controls how the Interceptor handles errors during header injection:
  • When false (default): errors are logged as warnings and the request continues processing. Headers that fail to be injected are skipped.
  • When true: any error during header injection (such as deserialization failures or missing fields) causes the request to fail with a clear error message.

Header injection Interceptor example

curl \
  --request PUT \
  --url 'http://localhost:8888/gateway/v2/interceptor' \
  --header 'Authorization: Basic YWRtaW46Y29uZHVrdG9y' \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "name": "myDynamicHeaderInjectionInterceptor",
  "pluginClass": "io.conduktor.gateway.interceptor.DynamicHeaderInjectionPlugin",
  "priority": 100,
  "config": {
    "topic": "topic.*",
    "headers": {
      "X-CLIENT_IP": "{{userIp}} testing",
      "X-USER-ID": "{{record.key.id}}",
      "X-USER-EMAIL": "{{record.value.email}}"
    },
    "overrideIfExists": true,
    "failOnError": false,
    "schemaRegistryConfig": {
      "host": "http://schema-registry:8081"
    }
  }
}'
Let’s produce a simple record to the injectHeaderTopic topic.
echo 'inject_header' | docker-compose exec -T kafka-client \
    kafka-console-producer  \
        --bootstrap-server conduktor-gateway:6969 \
        --producer.config /clientConfig/gateway.properties \
        --topic injectHeaderTopic
Let’s consume from our injectHeaderTopic.
docker-compose exec kafka-client \
  kafka-console-consumer \
    --bootstrap-server conduktor-gateway:6969 \
    --consumer.config /clientConfig/gateway.properties \
    --topic injectHeaderTopic \
    --from-beginning \
    --max-messages 1 \
    --property print.headers=true
You should see the message with its headers, as below:
X-USER_IP:172.19.0.3 testing   inject_header

Message integrity Interceptor

The message integrity Interceptor signs Kafka records on produce and verifies them on fetch, letting consumers detect whether a record changed after it was produced. Two plugins work together:
  • ProduceIntegrityPolicyPlugin signs records using HMAC-SHA256 through Google Tink.
  • FetchIntegrityPolicyPlugin verifies signatures and drops or allows records based on your policy.
You store signing keys in your HashiCorp Vault Key-Value (KV) v2 instance. Gateway reads and caches them locally.

Ordering with other Interceptors

The message integrity Interceptor is always the outermost layer — signing runs last on produce (after all other Interceptors have transformed the record) and verification runs first on fetch (before any normal Interceptor runs). This ensures the signature covers the final produced payload and is verified before any transformation on consume. Gateway enforces this ordering automatically. You don’t have to set specific priority values for integrity Interceptors — Gateway places them in fixed pipeline positions regardless of their configured priority:
  • Produce: the sign plugin always runs after all other Interceptors.
  • Fetch: the verify plugin always runs before all other Interceptors.
Gateway also validates that two integrity Interceptors of the same type don’t have overlapping scopes (Virtual Cluster, group or username). If they do, Gateway rejects the configuration.
The Interceptor reads Vault credentials only from its config. You have to specify credential fields (such as token, roleId or secretId) in the config. You can set their values with placeholders like token: "${VAULT_TOKEN}", which Gateway resolves when it loads the config. Gateway does not fall back to environment variable names when you omit a credential field (for example, there is no built-in fallback to VAULT_TOKEN if you leave out token).
When Vault is unreachable (for example, on a cache miss), Gateway propagates the error to the Kafka client: producers receive the failure on produce and consumers receive it on fetch. Gateway does not silently drop records in these cases.

Configure the secretKeyUri

secretKeyUri points to a specific field in a KV v2 secret. Use the format <mount>/data/<path>#<fieldName>. Examples: secret/data/signing-key#key, secret/data/app/keys/signing#hmacKey.

Manage key versions in Vault KV v2

Vault KV v2 versions secrets: each write to the same path creates a new version. Gateway handles versions as follows:
  • Produce (sign): Gateway uses the latest version of the secret at the path you configured. When you write a new value to that path in Vault, Gateway picks it up once the cache entry expires (see cache.ttlMs). Each signed record stores the key version in its signature header.
  • Fetch (verify): The signature on each record identifies the key version used to sign it. Gateway fetches that exact version from Vault to verify, so records signed with an older version still verify correctly after you rotate to a newer version.
  • Older versions: Keep older secret versions readable in Vault until you no longer need to verify records signed with them (for example, until they are consumed or past your retention period).

Configure produce (sign) plugin

If a record already has the signature header, the Interceptor throws PolicyViolationException and does not re-sign.
| Key | Type | Default | Description |
|---|---|---|---|
| topic | String | .* | Topics matching this regex have the Interceptor applied |
| signatureHeader | String | conduktor.integrity.signature | Header name for the signature |
| secretKeyUri | String | | Vault KV v2 path and field (for example, secret/data/signing-key#key) |
| keyProviderConfig | Object | | Key provider (Vault); see Authenticate with Vault below |
| cache.ttlMs | Long | 300000 | How long (in milliseconds) the signing key is cached before Gateway fetches it again from Vault. A shorter TTL means rotated keys are picked up faster (default: five minutes) |
| cache.maxSize | Integer | 100 | Maximum number of keys to cache |

Configure fetch (verify) plugin

Gateway drops records that fail verification (missing signature, malformed header or invalid MAC (Message Authentication Code)) and emits an audit event. After successful verification, Gateway removes the signature header from the record before returning it to the consumer. For other errors (such as Vault being unreachable on a cache miss), Gateway propagates the error to the Kafka client and does not silently drop the record.
When a record is dropped, or allowed with a missing signature, the fetch (verify) plugin emits a fetch response audit event (error level). To receive these events, enable the audit feature with GATEWAY_FEATURE_FLAGS_AUDIT (see Audit logs and Environment variables). Audit event details:
  • Event type: fetch response audit event (level: error)
  • Information included: topic, partition, offset, Interceptor name, plugin name (FetchIntegrityPolicyPlugin) and a message describing the reason
  • Reason values: missing_signature (no signature header), malformed_signature (header could not be decoded), verification_failed:INVALID_SIGNATURE (MAC does not match) or verification_failed:UNKNOWN_KEY (key version not found in Vault)
| Key | Type | Default | Description |
|---|---|---|---|
| topic | String | .* | Topics matching this regex have the Interceptor applied |
| signatureHeader | String | conduktor.integrity.signature | Header name where Gateway stores the signature |
| missingSignaturePolicy | String | SKIP | When a record has no signature header: SKIP (drop and audit) or ALLOW (audit and keep in response) |
| keyProviderConfig | Object | | Key provider (Vault); see Authenticate with Vault below |
| cache.ttlMs | Long | 300000 | How long (in milliseconds) a verification key stays in cache before Gateway fetches it again from Vault. This controls how long previously seen keys are reused without a Vault lookup |
| cache.maxSize | Integer | 100 | Maximum number of keys to cache |

Authenticate with Vault for message integrity

All auth types use the common fields: uri (required) and optionally namespace, openTimeoutSeconds (default 5), readTimeoutSeconds (default 30), keyStore, trustStore and connectionBackoff. Set type to one of the following and add the corresponding fields. You can also configure TLS for the Vault connection:
  • keyStore: set keyStorePath and keyStorePassword for client certificate authentication
  • trustStore: set trustStorePath and trustStorePassword to verify the Vault server certificate
| type | Required fields | Optional fields |
|---|---|---|
| TOKEN | token | |
| APP_ROLE | roleId, secretId | mount (default approle) |
| KUBERNETES | role | tokenPath (default /var/run/secrets/kubernetes.io/serviceaccount/token), mount (default kubernetes) |
| USERNAME_PASSWORD | username, password | mount (default userpass) |
| GITHUB | token | mount (default github) |
| LDAP | username, password | mount (default ldap) |
| GCP | role, jwt | |
| AWS_EC2_PKCS7 | pkcs7 | role, nonce, mount (default aws) |
| AWS_EC2 | identity, signature | role, nonce, mount (default aws) |
| AWS_IAM | iamRequestUrl, iamRequestBody, iamRequestHeaders | role, mount (default aws) |
| JWT | provider, role, jwt | |
All auth types except TOKEN support automatic token renewal: Gateway renews Vault tokens in the background so your Interceptor continues to work without interruption. For transient Vault failures, you can optionally set connectionBackoff: backoffDelay (default 5), backoffMaxDelay (default 30), backoffChronoUnit (default SECONDS), backoffDelayFactor (default 1.1).

Set up Vault for message integrity

  1. Enable KV v2: vault secrets enable -version=2 kv
  2. Create a signing key for HMAC-SHA256:
    • The key material has to be at least 32 bytes (256 bits) after decoding. Gateway enforces this per NIST SP 800-107 Rev 1 and rejects shorter keys with an error.
    • Store the key in a KV v2 secret as a base64-encoded string. Gateway decodes the Base64 value and uses the resulting bytes, so the decoded length has to be at least 32 bytes.
    • The field name in the secret has to match the <fieldName> in your secretKeyUri (for example, secret/data/signing-key#key uses field name key).
    • Example: generate 32 random bytes, base64-encode them for storage, then write to Vault: KEY=$(openssl rand -base64 32) then vault kv put -mount=secret signing-key key="$KEY"
  3. Create a policy for Gateway with read access on the secret path (for example, path "secret/data/signing-key" { capabilities = ["read"] })

Understand the signature format

Gateway stores each signature in a Kafka header as JSON with two fields: k = the secretKeyUri with its version (for example, secret/data/signing-key#key@1) and s = the base64-encoded HMAC-SHA256 MAC.
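A sketch of this layout in Python (illustrative only; the exact byte serialization Gateway feeds into the MAC is not documented here, so the payload below is a stand-in):

```python
import base64
import hashlib
import hmac
import json

key = b"k" * 32                   # HMAC key; must decode to >= 32 bytes
payload = b"record-value-bytes"   # stand-in for whatever Gateway actually signs

# Sign: compute the HMAC-SHA256 MAC and build the JSON header value
mac = hmac.new(key, payload, hashlib.sha256).digest()
header = json.dumps({
    "k": "secret/data/signing-key#key@1",        # key URI with its version
    "s": base64.b64encode(mac).decode("ascii"),  # base64-encoded MAC
})

# Verify: fetch the key version named in "k", recompute, compare
parsed = json.loads(header)
expected = hmac.new(key, payload, hashlib.sha256).digest()
assert hmac.compare_digest(base64.b64decode(parsed["s"]), expected)
print("signature verified with", parsed["k"])
```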

Message integrity Interceptor examples

apiVersion: gateway/v2
kind: Interceptor
metadata:
  name: integrity-sign
  scope:
    vCluster: passthrough
spec:
  pluginClass: io.conduktor.gateway.interceptor.integrity.ProduceIntegrityPolicyPlugin
  config:
    topic: ".*"
    signatureHeader: "conduktor.integrity.signature"
    secretKeyUri: "secret/data/signing-key#key"
    keyProviderConfig:
      vault:
        uri: "http://localhost:8200"
        type: TOKEN
        token: "${VAULT_TOKEN}"
    cache:
      ttlMs: 3600000
      maxSize: 100
Apply with: conduktor apply -f integrity-sign-interceptor.yaml and conduktor apply -f integrity-verify-interceptor.yaml.
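The matching fetch (verify) Interceptor can be declared the same way. This is an illustrative sketch mirroring the sign example above; the plugin class and config keys come from the tables in this section:

```yaml
apiVersion: gateway/v2
kind: Interceptor
metadata:
  name: integrity-verify
  scope:
    vCluster: passthrough
spec:
  pluginClass: io.conduktor.gateway.interceptor.integrity.FetchIntegrityPolicyPlugin
  config:
    topic: ".*"
    signatureHeader: "conduktor.integrity.signature"
    missingSignaturePolicy: "SKIP"
    keyProviderConfig:
      vault:
        uri: "http://localhost:8200"
        type: TOKEN
        token: "${VAULT_TOKEN}"
    cache:
      ttlMs: 3600000
      maxSize: 100
```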

Encryption Interceptors

Gateway encrypts your Kafka data as it passes through the proxy, before it reaches the broker. Unlike TLS (Transport Layer Security), which only protects data in transit, Gateway encryption ensures data remains encrypted when stored on Kafka brokers. This section covers all of the configuration options available for every encryption Interceptor.

Encryption configuration

The properties detailed in this section work for the following plugins:
| | On produce | On consume (DEPRECATED) |
|---|---|---|
| List-based | EncryptPlugin | FetchEncryptPlugin |
| Schema-based | EncryptSchemaBasedPlugin | FetchEncryptSchemaBasedPlugin |

On consume encryption plugins were deprecated in Gateway v3.16.0 and will be removed in Gateway v3.19.0.
Schema-based and list-based encryption plugins each have their own configuration, but some properties are common to both.
| Key | Type | Default | Description |
|---|---|---|---|
| **Common properties** | | | |
| topic | String | .* | Topics matching this regex will have the interceptor applied. |
| schemaRegistryConfig | Schema registry | | Configuration of your schema registry. Required if you want to encrypt data produced using Avro, JSON or Protobuf schemas. |
| schemaDataMode | String | preserve_avro | As of Gateway v3.3, you can preserve the inbound message format when encrypting Avro data, rather than converting the message to JSON (the previous behavior). To convert the record to JSON and break the link to its schema in the backing topic, set this field to convert_json (the default until v3.3). |
| kmsConfig | KMS | | Configuration of one or multiple KMS. |
| enableAuditLogOnError | Boolean | true | The audit log will be enabled when an error occurs during encryption/decryption. |
| compressionType | Enum | none | The data is compressed before encryption (only for data configured with full payload encryption). Available values are: none, gzip, snappy, lz4 or zstd. |
| throttleTimeMs | Integer | 0 | When encryption fails, apply client throttling for the specified time in milliseconds. This helps prevent overwhelming the system during encryption failures. Find out more about client throttling. |
| errorPolicy | String | fail_on_encrypted | Determines the plugin behavior when it encounters a record that’s already encrypted. Possible values: fail_on_encrypted, skip_already_encrypted. |
| **List-based properties** | | | |
| recordValue | Value and key encryption | | Configuration to encrypt the record value. |
| recordKey | Value and key encryption | | Configuration to encrypt the record key. |
| recordHeader | Headers encryption | | Configuration to encrypt the record headers. |
| **Schema-based properties** | | | |
| defaultKeySecretId | Secret key template | | Default keySecretId to use if none is set in the schema. It must be a unique identifier for the secret key and can be a template for crypto shredding use cases. |
| defaultAlgorithm | Algorithm | AES128_GCM | Default algorithm to use if no algorithm is set in the schema. |
| tags | List[String] | | List of tags to search for in the schema to encrypt the specified fields. |
| namespace | String | conduktor. | Prefix of custom schema constraints for encryption. |

List-based

Decide what you want to encrypt:
  • Record value and record key:
    • Encrypt a set of fields
    • Encrypt the full payload
  • or header keys:
    • Encrypt a set of fields
    • Encrypt the full payload
    • Encrypt a set of headers that match a regex
Record values and record keys

Set the following properties for recordValue (value encryption) and/or recordKey (key encryption):
| Key | Type | Default | Description |
|---|---|---|---|
| **Full-payload encryption** | | | |
| payload.keySecretId | Secret key template | | Secret key; can be a template for crypto shredding use cases. |
| payload.algorithm | Algorithm | AES128_GCM | Algorithm to use. |
| **Field-level encryption** | | | |
| fields[].fieldName | String | | Name of the field to encrypt. It can be a nested structure with a dot (.), such as education.account.username or banks[0].accountNo. |
| fields[].keySecretId | Secret key template | | Unique identifier for the secret key. You can store this key in your KMS by using the KMS key templates. It can be a template for crypto shredding use cases. |
| fields[].algorithm | Algorithm | AES128_GCM | Algorithm to use to encrypt this field. |
Check out the encryption examples.

Header keys

Set the following properties for recordHeader:
| Key | Type | Default | Description |
|---|---|---|---|
| **Full-payload encryption** | | | Configuration to encrypt the full payload. |
| payload.keySecretId | Secret key template | | Secret key; can be a template for crypto shredding use cases. |
| payload.algorithm | Algorithm | AES128_GCM | Algorithm to use. |
| **Field-level encryption** | | | |
| fields[].fieldName | String | | Name of the field to encrypt. It can be a nested structure with a dot (.), such as education.account.username or banks[0].accountNo. |
| fields[].keySecretId | Secret key template | | Unique identifier for the secret key. It can be a template for crypto shredding use cases. |
| fields[].algorithm | Algorithm | AES128_GCM | Algorithm to use to encrypt this field. |
| **Headers encryption** | | | |
| header | String | | Headers that match this regex will be encrypted. It can encrypt all headers, including Gateway headers. |
Check out the encryption example.

Schema-based

To encrypt your data, you can set a few constraints in your schema. These constraints are detailed below, assuming you’re using the default namespace value (conduktor.). If you have changed the namespace value in the Interceptor configuration, change the key name in your schema accordingly.
| Key | Type | Default | Description |
|---|---|---|---|
| conduktor.keySecretId | Secret key template | | Unique identifier for the secret key; can be a template for crypto shredding use cases. |
| conduktor.algorithm | Algorithm | AES128_GCM | Algorithm to use to encrypt this field. |
| conduktor.tags | List[String] | | Fields tagged with a matching tag from your Interceptor will be encrypted using the keySecretId and algorithm specified in the schema. |

If these are not defined in the schema, the defaultKeySecretId and defaultAlgorithm from the Interceptor configuration will be used.
If your field meets one of these three conditions, it will be encrypted:
  1. The field has a keySecretId set in the schema
  2. The field has an algorithm set in the schema
  3. The field has a set of tags set in the schema, and one of them is part of the tags list specified in the Interceptor configuration
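For example, a field in an Avro schema might carry these constraints directly (illustrative fragment; PII is an assumed tag name that would also appear in the Interceptor's tags list):

```json
{
  "name": "password",
  "type": "string",
  "conduktor.keySecretId": "password-secret",
  "conduktor.algorithm": "AES256_GCM",
  "conduktor.tags": ["PII"]
}
```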
Check out the encryption example.

Secret keys

Mustache templates

In all the encryption plugins, you can use mustache templates for the keySecretId, making your secret keys dynamic.
The value of an encrypted field is replaced with its ciphertext, so you can’t use a field that is itself being encrypted as part of the keySecretId.
| Pattern | Replaced by |
|---|---|
| {{record.topic}} | Name of the topic whose data you’re encrypting. |
| {{record.key}} | Key of the encrypted record. |
| {{record.value.someValueFieldName}} | Value of the field called someValueFieldName. If you’re doing field-level encryption, ensure that someValueFieldName is not included in the fields to encrypt; otherwise, you will not be able to decrypt it. |
| {{record.value.someList[0].someValueField}} | Value of the field called someValueField, in the first element of the list someList. |
| {{record.header.someHeader}} | Value of the header called someHeader. |
Here’s a record example:
# Header
someHeader=myHeader

# Key
myKey

# Value
{
  "someValueFieldName": "I",
  "someList": [{ "someValueField": "love" }, { "someValueField": "Kafka" }]
}
You can set "keySecretId": "{{record.topic}}-{{record.header.someHeader}}-{{record.key}}" - this will create an encryption key called myTopic-myHeader-myKey in memory. If you want this key to be stored in your Vault KMS, you can set: "keySecretId": "vault-kms://https://vault:8200/transit/keys/{{record.topic}}-{{record.header.someHeader}}-{{record.key}}".
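A minimal sketch of this substitution (illustrative only; Gateway’s own resolver also handles nested value fields and list indices):

```python
import re

def render_key_secret_id(template, topic, key, headers, value):
    # Minimal sketch of keySecretId template substitution; not Gateway code.
    context = {"record.topic": topic, "record.key": key}
    context.update({f"record.header.{k}": v for k, v in headers.items()})
    context.update({f"record.value.{k}": v for k, v in value.items()})
    # Replace each {{pattern}} with its context value, leaving unknowns intact
    return re.sub(
        r"\{\{\s*([^{}]+?)\s*\}\}",
        lambda m: str(context.get(m.group(1), m.group(0))),
        template,
    )

key_id = render_key_secret_id(
    "{{record.topic}}-{{record.header.someHeader}}-{{record.key}}",
    topic="myTopic",
    key="myKey",
    headers={"someHeader": "myHeader"},
    value={},
)
print(key_id)  # myTopic-myHeader-myKey
```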

KMS integration

Any keySecretId that doesn’t match one of the schemes detailed below will be rejected and the encryption operation will fail.
Keys are strings that start with a letter, followed by a combination of letters, underscores (_), hyphens (-) and numbers. Special characters are not allowed. They also work with the mustache pattern.
To make sure the key is created in your KMS, you have to (1) configure the connection to the KMS and (2) use the following format as keySecretId:
| KMS | KMS identifier prefix | Key URI format | Example |
|---|---|---|---|
| In-Memory | in-memory-kms:// | in-memory-kms://&lt;key-id&gt; | in-memory-kms://my-password-key-id |
| Azure | azure-kms:// | azure-kms://&lt;scheme&gt;://&lt;key-vault-name&gt;.vault.azure.net/keys/&lt;object-name&gt;/&lt;object-version&gt; | azure-kms://https://my-key-vault.vault.azure.net/keys/conduktor-gateway/4ceb7a4d1f3e4738b23bea870ae8745d |
| AWS | aws-kms:// | aws-kms://arn:aws:kms:&lt;region&gt;:&lt;account-id&gt;:key/&lt;key-id&gt; | aws-kms://arn:aws:kms:us-east-1:123456789012:key/password-key-id |
| GCP | gcp-kms:// | gcp-kms://projects/&lt;project-id&gt;/locations/&lt;location-id&gt;/keyRings/&lt;key-ring-id&gt;/cryptoKeys/&lt;key-id&gt; | gcp-kms://projects/my-project/locations/us-east1/keyRings/my-key-ring/cryptoKeys/password-key-id |
| Gateway | gateway-kms:// | gateway-kms://&lt;key-id&gt; (uses the master key from a KMS above) | gateway-kms://user-{{record.key}} |
| Vault (Transit) | vault-kms:// | vault-kms://[scheme://]&lt;vault-host&gt;/transit/keys/&lt;key-id&gt; | vault-kms://https://vault:8200/transit/keys/password-key-id |
| Vault Transform | vault-transform:// | vault-transform://&lt;role-name&gt; | vault-transform://password-role |
| Test tokenization | test-tokenization:// | test-tokenization://&lt;role-name&gt; | test-tokenization://password-role |
In-memory and test-tokenization modes are for testing and development purposes only. Test tokenization also requires GATEWAY_FEATURE_FLAGS_TEST_TOKENIZATION to be set to TRUE.
If not specified, the Vault KMS scheme defaults to https. This means that vault-kms://https://vault:8200/transit/keys/password-key-id and vault-kms://vault:8200/transit/keys/password-key-id are identical.

Tokenization

Tokenization is an alternative to encryption for protecting sensitive data in Kafka messages. Instead of encrypting values, sensitive data is replaced with tokens, while also storing the original values securely in HashiCorp Vault’s transform secrets engine. Key benefits of tokenization:
  • Deterministic tokens: the same input always generates the same token, enabling queries on tokenized data
  • Format-preserving transformation: tokens can maintain the format of the original data
Tokenization is configured using the existing encryption Interceptors (EncryptPlugin, DecryptPlugin, etc.) but only works with the Transform Secrets Engine in Vault Enterprise (find out how to configure Vault). To tokenize data, use the encryption Interceptors with a vault-transform:// prefix in your keySecretId. To de-tokenize and retrieve the original values, use the decryption Interceptors with the same configuration. Here’s a sample field configured for tokenization:
fields:
  - fieldName: "email"
    keySecretId: "vault-transform://email-key"
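Putting this together, a complete tokenization Interceptor payload might look like the following sketch. The interceptor name, topic and role name are illustrative; it reuses the field-level configuration shape shown for the encryption Interceptors elsewhere in this page.

```json
{
  "name": "myTokenizationPlugin",
  "pluginClass": "io.conduktor.gateway.interceptor.EncryptPlugin",
  "config": {
    "topic": "customers",
    "kmsConfig": {
      "vault": {
        "uri": "https://vault:8200",
        "token": "${VAULT_TOKEN}"
      }
    },
    "recordValue": {
      "fields": [
        {
          "fieldName": "email",
          "keySecretId": "vault-transform://email-key"
        }
      ]
    }
  }
}
```

The matching DecryptPlugin configuration with the same keySecretId de-tokenizes the field back to its original value.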

Supported algorithms

  • AES128_GCM (default)
  • AES128_EAX
  • AES256_EAX
  • AES128_CTR_HMAC_SHA256
  • AES256_CTR_HMAC_SHA256
  • CHACHA20_POLY1305
  • XCHACHA20_POLY1305
  • AES256_GCM

Choosing an encryption algorithm

Gateway supports multiple encryption algorithms, with AES128_GCM as the default. When selecting an algorithm, consider your security requirements, performance needs, and message volume.
Default algorithm: AES128_GCM
AES128_GCM is suitable for most use cases. However, it has an important security limitation:
Security consideration: when the same DEK (Data Encryption Key) is used to encrypt approximately 4 billion (2³²) messages, AES-GCM's security guarantees degrade due to nonce collision risks. According to NIST Special Publication 800-38D (Section 8.3), nonce collisions may expose encryption keys, compromising the confidentiality and integrity of data encrypted with that key.
For high-traffic scenarios, this threshold can be reached quickly. For example:
  • At 10-50MB/s with 1KB message sizes, the 4-billion-message threshold can be reached in just over 24 hours
  • Each unique keySecretId uses its own DEK, so the limit applies per key, not globally
If you expect a single DEK to encrypt more than ~2³² messages, consider:
  • Using a different algorithm (see recommendations below)
  • Implementing DEK rotation before reaching the threshold
  • Using multiple keySecretId values to distribute the message count across multiple DEKs
When to keep the default (AES128_GCM)
  • Low to moderate message volume per DEK (well below 2³² messages per key)
  • Need for compatibility with existing AES-GCM implementations
  • Hardware acceleration (AES-NI) is available, providing good performance
Consult your cryptographic library documentation and security requirements to choose the appropriate algorithm for your use case.
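The threshold arithmetic above can be sketched as follows. The throughput and message size are illustrative values, not measurements:

```python
# Estimate how long a single DEK can be used before the AES-GCM
# 2^32-message threshold (NIST SP 800-38D) is reached.
THRESHOLD = 2 ** 32  # messages per DEK

def hours_to_threshold(throughput_bytes_per_s: float,
                       message_size_bytes: float) -> float:
    # Messages per second at the given throughput and message size.
    messages_per_second = throughput_bytes_per_s / message_size_bytes
    # Seconds to exhaust the threshold, converted to hours.
    return THRESHOLD / messages_per_second / 3600

# 50 MB/s of 1 KB messages exhausts the threshold in roughly a day.
print(round(hours_to_threshold(50_000_000, 1_000), 1))
# → 23.9
```

Remember that the limit applies per keySecretId, so templated secret ids that spread records across many DEKs extend the effective lifetime proportionally.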

Key rotation

Gateway uses envelope encryption with two types of keys: DEK (Data Encryption Key) and KEK (Key Encryption Key). Understanding how and when to rotate these keys is important for maintaining security.
KEK rotation
Customer responsibility: KEK (Key Encryption Key) rotation must be performed by the customer and is not handled automatically by Gateway. This is the customer’s responsibility and must be done through your KMS provider (AWS KMS, Azure Key Vault, HashiCorp Vault, GCP KMS, etc.).
When to rotate KEK
You should rotate your KEK based on:
  • Security policies: Follow your organization’s key rotation policies and compliance requirements
  • DEK encryption frequency: If DEKs are being encrypted frequently (high message volume), consider more frequent KEK rotation
  • Security incidents: Rotate immediately if a KEK is suspected to be compromised
  • Best practices: Many organizations rotate KEKs annually or quarterly, but the frequency should match your security requirements
After rotating a KEK, Gateway will automatically use the new KEK version for encrypting new DEKs. However, existing EDEKs encrypted with the old KEK version will still be decryptable as long as the old KEK version remains available in your KMS. Most KMS providers retain old key versions for backward compatibility, allowing you to decrypt historical data while new data uses the rotated key.

Supported compression types

  • none
  • gzip
  • snappy
  • lz4
  • zstd

Encryption error policy

This policy determines the action taken when an encryption Interceptor encounters a record that's already encrypted.
Error policyDescription
fail_on_encryptedThe encryption operation will fail with an exception when encountering already encrypted records. This is the default behavior and maintains backward compatibility.
skip_already_encryptedThe encryption Interceptor will skip already encrypted records and pass them through unchanged. This enables chaining multiple encryption Interceptors together.
Example configuration with error policy:
{
  "name": "mySchemaBasedEncryptPlugin",
  "pluginClass": "io.conduktor.gateway.interceptor.EncryptSchemaBasedPlugin",
  "config": {
    "topic": "sensitive-data",
    "defaultKeySecretId": "vault-kms://vault:8200/transit/keys/default-key",
    "defaultAlgorithm": "AES256_GCM",
    "tags": ["PII", "ENCRYPT"],
    "errorPolicy": "skip_already_encrypted",
    "kmsConfig": {
      "vault": {
        "uri": "http://vault:8200",
        "token": "${VAULT_TOKEN}"
      }
    }
  }
}

Decryption configuration

Now that your fields or payload are encrypted, you can decrypt them using the DecryptPlugin Interceptor.
KeyTypeDefaultDescription
topicString.*Topics matching this regex will have the Interceptor applied.
schemaRegistryConfigSchemaRegistryConfiguration of your schema registry. Needed if you want to decrypt into Avro, JSON or Protobuf schemas.
kmsConfigKMSConfiguration of one or multiple KMS
recordValueFieldsList[String]Only for field-level encryption - List of fields to decrypt in the value. If empty, we decrypt all the encrypted fields.
recordKeyFieldsList[String]Only for field-level encryption - List of fields to decrypt in the key. If empty, we decrypt all the encrypted fields.
recordHeaderFieldsList[String]Only for field-level encryption - List of headers to decrypt. If empty, we decrypt all the encrypted headers.
enableAuditLogOnErrorBooleantrueThe audit log will be enabled when an error occurs during encryption/decryption
errorPolicyStringreturn_encryptedDetermines the action if there is an error during decryption. The options are return_encrypted, fail_fetch and crypto_shred_safe_fail_fetch. See the Decryption error policy section for more details.
throttleTimeMsInteger0When decryption fails, apply client throttling for the specified time in milliseconds. This helps prevent the system from being overwhelmed during decryption failures. Find out more about client throttling
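As a sketch, a DecryptPlugin configuration combining these options could look like the following. The interceptor name and topic are illustrative, and the pluginClass shown is an assumption following the naming pattern of the encryption plugin classes elsewhere on this page.

```json
{
  "name": "myDecryptPlugin",
  "pluginClass": "io.conduktor.gateway.interceptor.DecryptPlugin",
  "config": {
    "topic": "sensitive-data",
    "errorPolicy": "fail_fetch",
    "throttleTimeMs": 1000,
    "kmsConfig": {
      "vault": {
        "uri": "https://vault:8200",
        "token": "${VAULT_TOKEN}"
      }
    }
  }
}
```

Leaving recordValueFields, recordKeyFields and recordHeaderFields unset means all encrypted fields are decrypted.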

Decryption error policy

This policy determines the action if there is an error during decryption.
Error policyDescription
return_encryptedThe encrypted data is returned to the client.
fail_fetchThe client will receive an error for the fetch and no data. When using the Gateway KMS, data that fails to decrypt because no key was found will use the policy return_encrypted instead (supports crypto shredding for just this KMS system).
crypto_shred_safe_fail_fetchThe client will receive an error for the fetch and no data. Data that fails to decrypt because no key was found will use the policy return_encrypted instead (supports crypto shredding for all KMS systems).

Schema registry configuration

Gateway supports Confluent-like and AWS Glue schema registries.
KeyTypeDefaultDescription
typestringCONFLUENTThe type of schema registry to use: choose CONFLUENT (for Confluent-like schema registries including OSS Kafka) or AWS for AWS Glue schema registries.
additionalConfigsmapAdditional properties maps to specific security-related parameters. For enhanced security, you can hide the sensitive values using environment variables as secrets.
Confluent-likeConfiguration for Confluent-like schema registries
hoststringURL of your schema registry.
cacheSizestring50Number of schemas that can be cached locally by this Interceptor so that it doesn’t have to query the schema registry every time.
AWS GlueConfiguration for AWS Glue schema registries
regionstringThe AWS region for the schema registry, e.g. us-east-1.
registryNamestringThe name of the schema registry in AWS (leave blank for the AWS default of default-registry).
basicCredentialsstringAccess credentials for AWS.
AWS credentialsAWS credential configuration
accessKeystringThe access key for the connection to the schema registry.
secretKeystringThe secret key for the connection to the schema registry.
validateCredentialsbooltruetrue / false flag to determine whether the credentials provided should be validated when set.
accountIdstringThe Id for the AWS account to use.
If you don’t supply a basicCredentials section for the AWS Glue schema registry, the client will attempt to find the connection information it needs from the environment (see AWS docs for details). The required credentials can be passed to Gateway this way as part of its core configuration.
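As an illustrative sketch of the table above (the exact nesting may differ in your Gateway version), an AWS Glue schemaRegistryConfig could look like:

```json
"schemaRegistryConfig": {
  "type": "AWS",
  "region": "us-east-1",
  "registryName": "default-registry",
  "basicCredentials": {
    "accessKey": "${AWS_ACCESS_KEY}",
    "secretKey": "${AWS_SECRET_KEY}",
    "validateCredentials": true
  }
}
```

Omitting the basicCredentials block falls back to the standard AWS environment-based credential lookup described above.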

KMS configuration

Find out how to configure the different KMS within your encrypt and decrypt Interceptors.

Configuration properties

PropertyTypeDefaultDescription
keyTtlMslong3600000Key’s time-to-live in milliseconds. The default is 1 hour. Disable the cache by setting it to 0.
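For example, to disable key caching entirely, set keyTtlMs to 0. This sketch assumes keyTtlMs sits at the top level of kmsConfig alongside the provider blocks:

```json
"kmsConfig": {
  "keyTtlMs": 0,
  "vault": {
    "uri": "https://vault:8200",
    "token": "${VAULT_TOKEN}"
  }
}
```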

Choose your KMS provider

ProviderWhen to useKey consideration
In-memoryLocal development and testing onlyKeys don’t persist - data becomes unreadable after Gateway restart
GatewayCrypto shredding with high-volume per-record keysStores encrypted keys locally, reducing KMS costs for millions of unique keys
AWS KMSAWS infrastructureNative IAM integration, no credential management needed
Azure Key VaultAzure infrastructureManaged identity support, integrates with Azure services
Vault KMSMulti-cloud or on-premisesMost flexible authentication (11 methods), works anywhere
GCP KMSGoogle Cloud infrastructureService account integration, follows GCP security model
Fortanix KMSFIPS 140-2 compliance requiredHardware security module (HSM) backed, meets strict regulatory requirements

In-memory KMS

This should not be used on production data.
Keys in the in-memory KMS are not persisted. This means you won't be able to decrypt old records (losing the data) if you:
  • use a Gateway cluster with more than a single node,
  • restart Gateway or
  • change the Interceptor configuration.

Gateway KMS

This KMS type is effectively a delegated storage model and is designed to support encryption use cases that generate unique secret Ids per record or even per field (typically via the Mustache template support for a secret Id). This technique is used in crypto-shredding type scenarios, e.g. encrypting records per user with their own key. It provides the option to leverage your KMS for security via a single master key, while efficiently and securely storing many per-record encryption keys (DEKs) in the Gateway managed store. For some architectures this can provide performance and cost savings for encryption use cases that generate a high volume of secret key Ids.
KeyTypeDescription
masterKeyIdStringThe master key secret Id used to encrypt any keys stored in the Gateway managed storage. This is in the same format as the keySecretId that’s used for encryption and the valid values are the same.
maxKeysNumberThe maximum number of secret Id references to be cached in memory for re-use. To avoid creating new encryption keys (DEKs), this needs to be larger than the total number of expected secret Ids. By default, it's the same as maxKeys in the cache config, or 1,000,000 if maxKeys isn't set.
The masterKeyId is used to secure every key for this configuration, stored by Gateway. Find out more about the secret key formats. You also have to supply a valid configuration for the KMS type referenced by the master key so this can be used. If this key is dropped from the backing KMS, then all keys stored by Gateway for that master key will become unreadable.
Gateway KMS encryption example
Here's a sample configuration for the Gateway KMS using a Vault-based master key:
"kmsConfig": {
   "gateway": {
      "masterKeyId": "vault-kms://vault:8200/transit/keys/applicants-1-master-key",
      "maxKeys" : 10000000
   },
   "vault": {
      "uri": "https://vault:8200",
      "token": "my-vault-token",
      "trustStore": {
        "trustStorePath": "/security/truststore.jks"
      }
   }
}
This can then be used to encrypt a field using gateway-kms:// as the secret key type:
"recordValue": {
   "fields": [
      {
         "fieldName": "name",
         "keySecretId": "gateway-kms://fieldKeySecret-name-{{record.key}}"
      }
   ]
}
When processing a record for the first time using this configuration, Gateway will:
  1. generate a DEK to encrypt the field data,
  2. turn it into an EDEK by encrypting with the masterKeyId secret from vault and
  3. store the EDEK in Gateway storage.
If a record key was 123456, the associated EDEK would be stored on a kafka record with the following key:
{"algorithm":"AES128_GCM","keyId":"gateway-kms://fieldKeySecret-name-123456","uuid":"<UNIQUE_PER_EDEK_GENERATED>"}
Multiple records produced against this config would cause multiple EDEKs to be saved in the Gateway storage (due to the {{record.key}} template giving a unique key for each Kafka record key). If there are multiple Gateway nodes running, it's also possible for multiple DEKs/EDEKs to be generated for the same record key. Two nodes processing different records with the same record key at the same time could both assume they were generating a DEK/EDEK for the first time. In this scenario, there would be two EDEKs in the Gateway storage with the same keyId, but each would have a different UUID. For example:
{"algorithm":"AES128_GCM","keyId":"gateway-kms://fieldKeySecret-name-123456","uuid":"2cd8125a-b55f-4214-a528-be3c9b47519b"}
{"algorithm":"AES128_GCM","keyId":"gateway-kms://fieldKeySecret-name-123456","uuid":"d8fcccf3-8480-4634-879a-48deed4e0e72"}
Nonetheless, there will only ever be one master key stored in the vault KMS, which is used to encrypt every DEK. This feature provides flexibility for your KMS storage and key management setups and is particularly useful for high volume crypto shredding.
Decryption using Gateway KMS
When using the gateway-kms secret key Id type, the decryption configuration used to decrypt the data also has to specify the masterKeyId, so that it can securely decrypt the keys stored in the local Gateway storage. Here's a sample setup:
"config": {
   "topic": "secure-topic",
   "kmsConfig": {
      "gateway": {
         "masterKeyId": "vault-kms://vault:8200/transit/keys/secure-topic-master-key"
      },
      "vault": {
         "uri": "https://vault:8200",
         "token": "my-token-for-vault",
         "trustStore": {
           "trustStorePath": "/security/truststore.jks"
         }
      }
   }
}
Crypto shredding
When using the gateway-kms secret key Id type, you can efficiently crypto shred EDEKs in the Gateway storage, so that anyone using the decryption plugin immediately loses access to the associated encrypted data. To do this, scan the Gateway storage Kafka topic (by default, _conduktor_gateway_encryption_keys) for every message matching the associated qualified secret Id. For example, a qualified secretId of gateway-kms://fieldKeySecret-name-123456 might have the following keys:
{"algorithm":"AES128_GCM","keyId":"gateway-kms://fieldKeySecret-name-123456","uuid":"2cd8125a-b55f-4214-a528-be3c9b47519b"}
{"algorithm":"AES128_GCM","keyId":"gateway-kms://fieldKeySecret-name-123456","uuid":"d8fcccf3-8480-4634-879a-48deed4e0e72"}
Publishing a message for each of these keys back to the same topic with a value of null (i.e. a tombstone) will effectively perform crypto shredding. This process won't prevent the creation of new keys if new messages are sent using the same record key; it only ensures that messages using the crypto shredded keys remain unrecoverable.
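As an illustrative sketch (not an official tool), the record keys to tombstone can be reconstructed from the algorithm, qualified secret id and per-EDEK UUID. The function name is an assumption; actually producing the tombstones to the storage topic is left to your Kafka client of choice.

```python
import json

# Illustrative sketch: build the record keys whose tombstones
# crypto-shred the EDEKs for one qualified secret id.
def edek_record_key(algorithm: str, key_id: str, uuid: str) -> str:
    # Compact JSON matching the key format shown above.
    return json.dumps({"algorithm": algorithm, "keyId": key_id, "uuid": uuid},
                      separators=(",", ":"))

# Producing (key, None) to _conduktor_gateway_encryption_keys for each
# of these keys tombstones the matching EDEKs.
for uuid in ["2cd8125a-b55f-4214-a528-be3c9b47519b",
             "d8fcccf3-8480-4634-879a-48deed4e0e72"]:
    print(edek_record_key("AES128_GCM",
                          "gateway-kms://fieldKeySecret-name-123456", uuid))
```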

AWS KMS

To set your AWS KMS, include this section in your Interceptor config, below aws. You can use one of these two authentication methods:
  • basic authentication or
  • session.
Make sure to follow the right method and provide the correct properties. For enhanced security, you can hide the sensitive values using environment variables as secrets.
KeyTypeDescription
Basic authentication
basicCredentials.accessKeyStringAccess key.
basicCredentials.secretKeyStringSecret key.
Session
sessionCredentials.accessKeyStringAccess key.
sessionCredentials.secretKeyStringSecret key.
sessionCredentials.sessionTokenStringSession token.
Managed identityConfigure the KMS from the context, not using variables. This will be overwritten if a specific KMS is configured within the Interceptor.

Azure KMS

To set your Azure KMS, include this section in your Interceptor config, below azure. You can use one of these two authentication methods:
  • token or
  • username and password.
Make sure you've followed the right method and that you've provided the correct properties. For enhanced security, you can hide the sensitive values using environment variables as secrets.
KeyTypeDescription
Token
tokenCredential.clientIdstringClient ID.
tokenCredential.tenantIdstringTenant ID.
tokenCredential.clientSecretstringClient secret.
Username and password
usernamePasswordCredential.clientIdstringClient ID.
usernamePasswordCredential.tenantIdstringTenant ID.
usernamePasswordCredential.usernamestringUsername.
usernamePasswordCredential.passwordstringPassword.
Managed identityConfigure the KMS from the context, and not using variables. This will be overwritten if a specific KMS is configured within the Interceptor.

Fortanix KMS

To set your Fortanix KMS, include this section in your Interceptor config under fortanix:
KeyTypeDescription
typeStringAuthentication method: API_KEY or USERNAME_PASSWORD
basePathStringFortanix DSM base URL: https://support.fortanix.com/docs/fortanix-dsm-saas-global-availability-map
apiKeyStringtype is API_KEY: Fortanix API key. Gateway only supports AES type keys.
usernameStringtype is USERNAME_PASSWORD: Fortanix username
passwordStringtype is USERNAME_PASSWORD: Fortanix password
Alternatively, you can use environment variables for sensitive credentials and configuration values. Set them in your Gateway deployment and they will be resolved at runtime. In that case, you don't have to supply a fortanix {} block in the KMS config. If the specified encryption key doesn't exist in Fortanix DSM, Gateway will automatically create it with the following configuration:
  • Key type: AES symmetric key
  • Mode: CBC
  • Key Size: 256
  • Permissions: ENCRYPT, DECRYPT

Google Cloud Platform KMS

To set your Google Cloud Platform (GCP) KMS, include this section in your Interceptor config, under gcp. You must first configure the service account key file. For enhanced security, you can hide the sensitive values using environment variables as secrets.
KeyTypeDescription
serviceAccountCredentialsFilePathStringService account key file in GCP.
enableDetailedErrorLoggingBooleanUse with caution - error responses may contain sensitive information. Defines whether to enable detailed GCP KMS error response logging. When enabled, full error responses from GCP KMS will be logged. When disabled (default), only safe fields (code, status, reason, domain) will be logged.
Managed identityConfigure the KMS from the context, and not using variables. This will be overwritten if a specific KMS is configured within the Interceptor.

Vault KMS

Gateway supports two data security backends for Vault: the Transit secrets engine and the Transform secrets engine. To configure Vault, add a new vault section in your Interceptor configuration. For enhanced security, you can hide the sensitive values using environment variables as secrets.
KeyTypeDescription
uriStringVault server base URI. Example: vault:8200 (assumed https for backward compatibility) or https://vault:8200 or http://vault:8200.
namespaceStringNamespace.
typeStringRequired for all types of VaultKMSConfig. Determines the type of authentication to use.
Supported types:
  • TOKEN
  • USERNAME_PASSWORD
  • GITHUB
  • LDAP
  • APP_ROLE
  • KUBERNETES
  • GCP
  • AWS_EC2_PKCS7
  • AWS_EC2
  • AWS_IAM
  • JWT
trustStoreTrustStoreTrust store configuration (JKS format) for verifying the Vault server’s TLS certificate when using HTTPS with a privately-signed or internal certificate authority.
keyStoreKeyStoreKey store configuration (JKS format) containing the client certificate and private key for mutual TLS (mTLS) authentication when the Vault server requires client certificate verification.
Managed identityLoad authentication information from the below environment variables.
VAULT_URIVault server base URI.
VAULT_NAMESPACEVault namespace.
TrustStore
KeyTypeDescriptionEnvironment variable
trustStorePathStringPath to JKS trust store file containing the Certificate Authority certificate.VAULT_SSL_TRUST_STORE_PATH
trustStorePasswordStringPassword for the JKS trust store file containing the Certificate Authority certificate.VAULT_SSL_TRUST_STORE_PASSWORD
When using HTTPS Vault URIs without an explicitly configured trust store, Gateway will attempt to use the VAULT_SSL_TRUST_STORE_PATH and VAULT_SSL_TRUST_STORE_PASSWORD environment variables if available.
KeyStore
KeyTypeDescriptionEnvironment variable
keyStorePathStringPath to JKS key store file containing the client certificate.VAULT_SSL_KEY_STORE_PATH
keyStorePasswordStringPassword for the JKS key store file containing the client certificate.VAULT_SSL_KEY_STORE_PASSWORD
The Vault client library requires that JKS key stores contain both a private key and client certificate with identical passwords. When using HTTPS Vault URIs without an explicitly configured key store, Gateway will attempt to use the VAULT_SSL_KEY_STORE_PATH and VAULT_SSL_KEY_STORE_PASSWORD environment variables if available.
Vault authentication types
KeyTypeDescription
Token authenticationUse token authentication.
typeStringMust be TOKEN. Indicates the type of authentication.
tokenStringSecurity token for accessing Vault.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be TOKEN. Indicates the type of authentication.
VAULT_TOKENToken to use for accessing Vault.
Username and passwordUse username and password authentication.
typeStringMust be USERNAME_PASSWORD. Indicates the type of authentication.
usernameStringUsername for accessing Vault.
passwordStringPassword for accessing Vault.
userpassAuthMountString(Optional) Mount path for the userpass auth method.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be USERNAME_PASSWORD. Indicates the type of authentication.
VAULT_USERNAMEUsername for accessing Vault.
VAULT_PASSWORDPassword for accessing Vault.
VAULT_AUTH_MOUNT(Optional) Mount path for the userpass auth method.
GitHub authenticationUse GitHub token authentication.
typeStringMust be GITHUB. Indicates the type of authentication.
tokenStringGitHub personal access token.
githubAuthMountString(Optional) Mount path for the GitHub auth method.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be GITHUB. Indicates the type of authentication.
VAULT_GITHUB_TOKENGitHub token for accessing Vault.
VAULT_AUTH_MOUNT(Optional) Mount path for the GitHub auth method.
LDAP authenticationUse LDAP authentication.
typeStringMust be LDAP. Indicates the type of authentication.
usernameStringLDAP username.
passwordStringLDAP password.
ldapAuthMountString(Optional) Mount path for the LDAP auth method.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be LDAP. Indicates the type of authentication.
VAULT_LDAP_USERNAMELDAP username.
VAULT_LDAP_PASSWORDLDAP password.
VAULT_AUTH_MOUNT(Optional) Mount path for the LDAP auth method.
AppRole authenticationUse AppRole authentication.
typeStringMust be APP_ROLE. Indicates the type of authentication.
roleIdStringRole ID for AppRole authentication.
secretIdStringSecret ID for AppRole authentication.
pathString(Optional) Mount path for the AppRole auth method.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be APP_ROLE. Indicates the type of authentication.
VAULT_APP_ROLE_IDRole ID for AppRole authentication.
VAULT_APP_SECRET_IDSecret ID for AppRole authentication.
VAULT_APP_PATH(Optional) Mount path for the AppRole auth method.
Kubernetes authenticationUse Kubernetes authentication.
typeStringMust be KUBERNETES. Indicates the type of authentication.
roleStringKubernetes role.
pathString(Optional) Token file path for the Kubernetes auth method.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be KUBERNETES. Indicates the type of authentication.
VAULT_KUBERNETES_ROLEKubernetes role.
VAULT_KUBERNETES_PATH(Optional) Token file path for the Kubernetes auth method.
GCP authenticationUse Google Cloud Platform authentication.
typeStringMust be GCP. Indicates the type of authentication.
roleStringGCP role for authentication.
jwtStringJWT token issued by Google Cloud Platform.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be GCP. Indicates the type of authentication.
VAULT_GCP_ROLEGCP role for authentication.
VAULT_GCP_JWTJWT token for accessing Vault.
AWS EC2 authentication (PKCS7)Use AWS EC2 PKCS7 authentication.
typeStringMust be AWS_EC2_PKCS7. Indicates the type of authentication.
roleStringAWS role for EC2 authentication.
pkcs7StringPKCS7 identity document.
nonceString(Optional) Nonce value for EC2 authentication.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be AWS_EC2_PKCS7. Indicates the type of authentication.
VAULT_AWS_ROLEAWS role for EC2 authentication.
VAULT_AWS_PKCS7PKCS7 identity document.
VAULT_AWS_NONCE(Optional) Nonce value for EC2 authentication.
VAULT_AUTH_MOUNT(Optional) Mount path for the AWS EC2 PKCS7 auth method.
AWS EC2 authenticationUse AWS EC2 identity authentication.
typeStringMust be AWS_EC2. Indicates the type of authentication.
roleStringAWS role for EC2 authentication.
identityStringAWS identity document.
signatureStringAWS signature for authentication.
nonceStringNonce value for EC2 authentication.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be AWS_EC2. Indicates the type of authentication.
VAULT_AWS_ROLEAWS role for EC2 authentication.
VAULT_AWS_IDENTITYAWS identity document.
VAULT_AWS_SIGNATUREAWS signature for authentication.
VAULT_AWS_NONCE(Optional) Nonce value for EC2 authentication.
VAULT_AUTH_MOUNT(Optional) Mount path for the AWS EC2 auth method.
AWS IAM authenticationUse AWS IAM Authentication.
typeStringMust be AWS_IAM. Indicates the type of authentication.
roleStringAWS role for IAM authentication.
iamRequestUrlStringIAM request URL for authentication.
iamRequestBodyStringIAM request body for authentication.
iamRequestHeadersStringIAM request headers for authentication.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be AWS_IAM. Indicates the type of authentication.
VAULT_AWS_ROLEAWS role for IAM authentication.
VAULT_AWS_IAM_REQUEST_URLIAM request URL for authentication.
VAULT_AWS_IAM_REQUEST_BODYIAM request body for authentication.
VAULT_AWS_IAM_REQUEST_HEADERSIAM request headers for authentication.
JWT authenticationUse JWT authentication.
typeStringMust be JWT. Indicates the type of authentication.
jwtStringJWT token for authentication.
providerStringJWT provider for authentication.
roleStringJWT role for authentication.
Managed identityLoad authentication information from the below environment variables.
VAULT_AUTH_TYPEStringMust be JWT. Indicates the type of authentication.
VAULT_JWTJWT token for authentication.
VAULT_JWT_PROVIDERJWT provider for authentication.
Example:
{
  "type": "APP_ROLE",
  "uri": "https://vault.example.com",
  "roleId": "my-role-id",
  "secretId": "my-secret-id",
  "trustStore": {
    "trustStorePath": "/security/truststore.jks"
  }
}
Connection backoff
If there's a connection failure (e.g., a network partition error), Gateway will automatically keep trying to reconnect to Vault. You can adjust this by modifying the kmsConfig.vault.connectionBackoff object:
KeyTypeDefaultDescription
backoffDelayInteger5Initial retry delay for Vault health checks
backoffMaxDelayInteger30Maximum retry delay
backoffChronoUnitStringSECONDSTime unit for delays (SECONDS, MILLIS, etc.)
backoffDelayFactorDouble1.1Exponential backoff multiplier
Example:
{
  "type": "TOKEN",
  "uri": "https://vault:8200",
  "token": "${VAULT_TOKEN}",
  "connectionBackoff": {
    "backoffDelay" : 5,
    "backoffMaxDelay": 30,
    "backoffChronoUnit" : "SECONDS",
    "backoffDelayFactor" : 1.1
  }
}
Configuring Vault Transit Secrets Engine
Here's the minimum Vault policy required for encryption and decryption to work with vault-kms:// prefixed keys:
path "transit/*" {
  capabilities = ["read", "update"]
}
Configuring Vault Transform Secrets Engine
Here's the minimum Vault policy required for encryption and decryption to work with vault-transform:// prefixed keys:
path "transform/*" {
  capabilities = ["read", "update"]
}
When using the Vault transform secrets engine, you can configure additional caching using the kmsConfig.vault.transformEngineCache object:
KeyTypeDefaultDescription
enabledBooleanfalseWhether to enable caching
populateOnSecureBooleanfalseCache tokens while securing, not just de-securing
ttlMsLong600000Token cache TTL in milliseconds after last access (10 minutes by default)
maxSizeLong1000000Maximum number of tokens to cache (1 million default).
Example:
{
  "type": "TOKEN",
  "uri": "https://vault:8200",
  "token": "${VAULT_TOKEN}",
  "transformEngineCache": {
    "enabled" : true,
    "populateOnSecure": false,
    "ttlMs": 600000,
    "maxSize": 1000000
  }
}

Client throttling configuration

When encryption or decryption operations fail, you can configure client throttling to help protect your system from being overwhelmed during error conditions. To enable client throttling, set the throttleTimeMs parameter in your encryption/decryption Interceptor config:
  • throttleTimeMs = 0 (default): no throttling - clients receive immediate error responses
  • throttleTimeMs > 0: clients will be throttled for the specified time in milliseconds when operations fail
When throttleTimeMs is configured with a value greater than 0:
  • cluster stability is protected: Gateway automatically throttles clients to prevent system overload
  • client compliance is built-in: Kafka clients automatically pause for the specified throttle time between requests
  • failure cascades are prevented: throttling reduces retry pressure, allowing brokers to recover
Configuration example:
{
  "name": "myEncryptPlugin",
  "pluginClass": "io.conduktor.gateway.interceptor.EncryptPlugin",
  "config": {
    "throttleTimeMs": 5000
  }
}