Creates a group with members and permissions in Console.
```yaml
---
apiVersion: iam/v2
kind: Group
metadata:
  name: developers-a
spec:
  displayName: "Developers Team A"
  description: "Members of the Team A - Developers"
  externalGroups:
    - "LDAP-GRP-A-DEV"
  externalGroupRegex:
    - "LDAP*"
  members:
    - member1@company.org
    - member2@company.org
  permissions:
    - resourceType: TOPIC
      cluster: shadow-it
      patternType: PREFIXED
      name: toto-
      permissions:
        - topicViewConfig
        - topicConsume
        - topicProduce
```
Groups checks:
spec.description is optional
spec.externalGroups is a list of LDAP or OIDC groups to sync with this Console Group
Members added this way will not appear in spec.members but spec.membersFromExternalGroups instead
spec.externalGroupRegex is a list of regex patterns that can match to a series of LDAP or OIDC groups to sync with this Console group. Members added this way will not appear in spec.members list.
Supports regex patterns for dynamic group matching (e.g., ^TEAM-.* to match all groups starting with “TEAM-”)
spec.membersFromExternalGroups is a read-only list of members added through spec.externalGroups or spec.externalGroupRegex
spec.members must be email addresses of members you wish to add to this group
spec.permissions are valid permissions as defined in Permissions
spec.policiesRef (optional), if set, has to be a valid list of ResourcePolicy.
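For instance, a group that attaches resource policies could look like the following sketch. The group name, member and policy name (`enforce-topic-naming`) are hypothetical; the referenced entries must be existing ResourcePolicy resources:

```yaml
---
apiVersion: iam/v2
kind: Group
metadata:
  name: payments-team          # hypothetical group name
spec:
  displayName: "Payments Team"
  members:
    - payments-lead@company.org
  policiesRef:
    - enforce-topic-naming     # must reference an existing ResourcePolicy
```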
Conduktor CLI does not verify that your Kafka configuration (spec.bootstrapServers, spec.properties, etc.) is valid. You need to check that in Console directly.
This section lets you configure the Kafka provider for this KafkaCluster.

Confluent Cloud

Provide your Confluent Cloud details to get additional features in Console:
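A minimal sketch of what that configuration can look like, assuming the `spec.kafkaFlavor` block with Confluent API credentials and environment/cluster identifiers; the placeholder values are hypothetical, so check the KafkaCluster reference for your Console version:

```yaml
spec:
  kafkaFlavor:
    type: Confluent
    key: "<confluent-api-key>"           # placeholder
    secret: "<confluent-api-secret>"     # placeholder
    confluentEnvironmentId: "<env-id>"   # placeholder
    confluentClusterId: "<cluster-id>"   # placeholder
```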
```yaml
---
apiVersion: console/v3
kind: Alert
metadata:
  name: messages-in-dead-letter-queue
  group: support-team # will be the owner of the alert, can be either a user, a group or an appInstance
  # user: user@company.org
  # appInstance: my-app-instance
spec:
  cluster: my-dev-cluster
  type: TopicAlert
  topicName: wikipedia-parsed-DLQ
  metric: MessageCount
  operator: GreaterThan
  threshold: 0
  description: "Alert for monitoring messages in dead letter queue"
  displayName: "DLQ message count alert"
  destination:
    type: Slack
    channel: "alerts-p1"
```
Alert checks:
metadata.user|metadata.group|metadata.appInstance has to be a valid user, group or appInstance.
spec.destination.type can be either Slack, Teams or Webhook. When set to:
Slack: spec.destination.channel has to be a valid Slack channel ID
Teams: spec.destination.url has to be a valid Teams webhook URL
Webhook:
spec.destination.url has to be a valid URL
spec.destination.method has to be GET, POST, PUT or DELETE
spec.destination.headers (optional) has to be key-value pairs of HTTP headers
spec.destination.authentication.type (optional) has to be BasicAuth (define spec.destination.authentication.username and spec.destination.authentication.password) or BearerToken (define spec.destination.authentication.token).
spec.cluster has to be a valid KafkaCluster name.
spec.type has to be BrokerAlert, TopicAlert, KafkaConnectAlert or ConsumerGroupAlert. When set to:
BrokerAlert: spec.metric has to be MessageIn, MessageOut, MessageSize, OfflinePartitionCount, PartitionCount, UnderMinIsrPartitionCount or UnderReplicatedPartitionCount.
TopicAlert: spec.metric has to be MessageCount, MessageIn, MessageOut or MessageSize and the spec.topicName has to be a Kafka topic that the owner can access.
KafkaConnectAlert: spec.metric has to be FailedTaskCount; spec.connectName has to be a valid KafkaConnect cluster associated with this spec.cluster Kafka cluster and spec.connectorName has to be a Kafka Connect connector that the owner can access.
ConsumerGroupAlert: spec.metric has to be OffsetLag or TimeLag and spec.consumerGroupName has to be a Kafka Consumer group that the owner can access.
spec.metric depends on the spec.type.
spec.operator has to be GreaterThan, GreaterThanOrEqual, LessThan, LessThanOrEqual or NotEqual.
spec.threshold has to be a number.
spec.description (optional) provides a text description of the alert.
spec.displayName (optional) provides a display name for the alert.
spec.disable (optional), default is false. Has to be true or false.
Alert resolution: Alerts resolve (stop firing) when the metric value no longer meets the threshold condition defined by the comparison operator (GreaterThan, LessThan, etc.).
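Putting the Webhook destination checks together, an alert posting to an HTTP endpoint might look like this sketch. The group, cluster, consumer group, URL and token variable are hypothetical; the exact shape of `headers` is assumed to be key-value pairs as described above:

```yaml
---
apiVersion: console/v3
kind: Alert
metadata:
  name: consumer-lag-alert       # hypothetical
  group: support-team
spec:
  cluster: my-dev-cluster
  type: ConsumerGroupAlert
  consumerGroupName: payments-consumer   # hypothetical
  metric: OffsetLag
  operator: GreaterThan
  threshold: 1000
  destination:
    type: Webhook
    url: https://hooks.example.com/alerts  # hypothetical endpoint
    method: POST
    headers:
      X-Source: conduktor
    authentication:
      type: BearerToken
      token: ${ALERT_WEBHOOK_TOKEN}        # hypothetical env-style placeholder
```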
```yaml
---
apiVersion: console/v2
kind: PartnerZone
metadata:
  name: external-partner-zone
spec:
  displayName: External Partner Zone
  description: An external partner to exchange data with.
  url: https://partner1.com
  partner:
    name: John Doe
    role: Data analyst
    email: johndoe@partner.io
    phone: 07827 837 177
  cluster: cdk-gateway
  underlyingCluster: cluster1
  authenticationMode:
    serviceAccount: partner-external-partner
    type: PLAIN
  vclusterName: custom-vcluster-name
  topics:
    - name: topic-a
      backingTopic: kafka-topic-a
      permission: WRITE
    - name: topic-b
      backingTopic: kafka-topic-a
      permission: READ
  trafficControlPolicies:
    maxProduceRate: 1e+06
    maxConsumeRate: 1e+06
    limitCommitOffset: 30
  headers:
    addOnProduce:
      - key: partner-name
        value: external-analytics-partner
        overrideIfExists: false
      - key: client-info
        value: "Client:{{clientId}}, from IP:{{userIp}}"
        overrideIfExists: true
      - key: kafka-api
        value: "Kafka API Key:{{apiKey}}, version {{apiKeyVersion}}"
        overrideIfExists: true
      - key: produce-metadata
        value: "User:{{user}}, via Gateway:{{gatewayHost}}, at timestampMillis:{{timestampMillis}}"
        overrideIfExists: true
    removeOnConsume:
      - keyRegex: my_team_prefix.*
```
Partner Zone checks:
spec.displayName is mandatory.
spec.description, spec.url and spec.partner are optional; they provide context about the partner.
spec.cluster has to be a valid Console cluster technical ID with the Provider configured as Gateway.
spec.underlyingCluster has to be a valid Console cluster technical ID for a cluster defined behind the Gateway referenced by spec.cluster.
When not specified, the value is inferred from spec.cluster, hence selecting the main cluster behind Gateway by default.
spec.authenticationMode.type must be one of [PLAIN, OAUTHBEARER, MTLS]. See authentication examples for detailed configuration.
spec.authenticationMode.serviceAccount requirements depend on the authentication type:
PLAIN: Any unique identifier for your partner (e.g., partner-external-partner). This will be created as a local Gateway service account automatically if it doesn’t exist.
OAUTHBEARER: Must match the “sub” claim in the partner’s OAuth/OIDC token (e.g., oauth-partner-service-account). The partner needs to authenticate using their OAuth provider.
MTLS: Must match the client’s Distinguished Name (DN) from their certificate, unless you’ve modified GATEWAY_SSL_PRINCIPAL_MAPPING_RULES (e.g., CN=partner-client,OU=Engineering,O=PartnerCorp,C=US).
spec.vclusterName (optional), custom name for the Virtual Cluster. If not provided, it will be auto-generated.
topics[].name is the name of the topic as it should appear to your external partner. This can be different from backingTopic.
topics[].backingTopic is the internal name of the topic that you want to share.
topics[].permission has to be set to either READ or WRITE (which includes READ).
trafficControlPolicies.maxProduceRate (optional), sets the maximum rate (in bytes/s) at which the partner can produce messages to the topics per Gateway node.
trafficControlPolicies.maxConsumeRate (optional), sets the maximum rate (in bytes/s) at which the partner can consume messages from the topics per Gateway node.
trafficControlPolicies.limitCommitOffset (optional), sets the maximum number of commit requests (in requests/minute) that the partner can make per Gateway node.
headers.addOnProduce (optional), list of headers to inject when producing messages. The value field supports special variables: {{user}}, {{userIp}}, {{clientId}}, {{apiKey}}, {{apiKeyVersion}}, {{gatewayHost}}, {{timestampMillis}}.
headers.removeOnConsume (optional), list of header key patterns (regex) to remove when consuming messages.
Side effects in Console and Kafka: Once created or updated, the following fields will be made available:
metadata.updatedAt (by consecutive get from the CLI/API).
metadata.status (by consecutive get from the CLI/API). Possible values are PENDING, READY or FAILED.
metadata.failedReason will be populated in case of FAILED status.
The service account will be created if it doesn’t exist and will be granted the permissions as declared in spec.topics.
The traffic control policies will be applied to the service account.
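For the other two authentication types, the `authenticationMode` block from the example above would change as in this sketch. The service account values are hypothetical and must follow the matching rules described earlier (OAuth "sub" claim, certificate DN):

```yaml
# OAUTHBEARER: serviceAccount must match the "sub" claim of the partner's token
authenticationMode:
  type: OAUTHBEARER
  serviceAccount: oauth-partner-service-account   # hypothetical

# MTLS: serviceAccount must match the client certificate DN
# (unless GATEWAY_SSL_PRINCIPAL_MAPPING_RULES has been modified)
authenticationMode:
  type: MTLS
  serviceAccount: "CN=partner-client,OU=Engineering,O=PartnerCorp,C=US"
```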
Create or update a data quality policy that applies rules to specific Kafka topics.
```yaml
---
apiVersion: v1
kind: DataQualityPolicy
metadata:
  name: user-data-validation-policy
spec:
  displayName: User data validation policy
  description: Validates all user data before it is produced to topics
  rules:
    - email-validation-rule
    - user-schema-rule
  targets:
    - cluster: main-cluster
      topic: users
      patternType: LITERAL
    - cluster: main-cluster
      topic: user-events-
      patternType: PREFIXED
  actions:
    block:
      enabled: true
    mark:
      enabled: false
```
DataQualityPolicy checks:
metadata.name is mandatory and must be unique.
spec.displayName is mandatory and is shown in the Console UI.
spec.description is optional but recommended to explain the policy’s purpose.
spec.rules is a list of DataQualityRule names to apply (references to rules created separately).
spec.targets is mandatory and defines which topics the policy applies to:
cluster is a valid Kafka cluster technical ID.
topic is the topic name or prefix.
patternType has to be either LITERAL (exact match) or PREFIXED (prefix match).
spec.actions defines what happens when a rule violation occurs:
block.enabled (optional, defaults to false) - when true, blocks messages that violate rules from being produced. Only available if the configured cluster is a Conduktor Gateway cluster with the appropriate license.
mark.enabled (optional, defaults to false) - when true, marks messages that violate rules with a header but allows production.
Side effects in Console and Kafka: Once created or updated, the following fields will be made available:
metadata.nameForMetrics (read-only identifier used in metrics).
metadata.group (read-only group identifier).
metadata.createdAt (read-only timestamp).
metadata.updatedAt (read-only timestamp).
metadata.createdBy (read-only user identifier).
metadata.updatedBy (read-only user identifier).
metadata.status (read-only status: failed, pending, or ready).
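For a monitor-only rollout, you could invert the actions from the example above so that violating messages are marked but still produced. This is a fragment of the same spec, not a complete resource:

```yaml
spec:
  actions:
    block:
      enabled: false
    mark:
      enabled: true   # violating messages get marked with a header but are still produced
```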
Permissions are used in groups and users and let you configure access to any Kafka resource or Console feature. A permission applies to a certain resourceType, which determines the required fields.
```yaml
# Grants view and reset on all consumer groups starting with group-* on shadow-it cluster
- resourceType: CONSUMER_GROUP
  cluster: shadow-it
  patternType: PREFIXED
  name: group-
  permissions:
    - consumerGroupView
    - consumerGroupReset
```
resourceType: CONSUMER_GROUP
cluster is a valid Kafka cluster
patternType is either PREFIXED or LITERAL
name is the name of the consumer group or consumer group prefix to apply the permissions to
permissions is a list of valid consumer group permissions