ConsoleGroup

  • API key(s): AdminToken
  • Managed with: UI, CLI, API, TF
  • Labels support: Missing
Creates a group with members and permissions in Console.
---
apiVersion: iam/v2
kind: Group
metadata:
  name: developers-a
spec:
  displayName: "Developers Team A"
  description: "Members of the Team A - Developers"
  externalGroups:
    - "LDAP-GRP-A-DEV"
  externalGroupRegex:
    - "LDAP*"
  members:
    - member1@company.org
    - member2@company.org
  permissions:
    - resourceType: TOPIC
      cluster: shadow-it
      patternType: PREFIXED
      name: toto-
      permissions:
        - topicViewConfig
        - topicConsume
        - topicProduce
Groups checks:
  • spec.description is optional
  • spec.externalGroups is a list of LDAP or OIDC groups to sync with this Console Group
    • Members added this way will not appear in spec.members but in spec.membersFromExternalGroups instead
  • spec.externalGroupRegex is a list of regex patterns used to match LDAP or OIDC groups to sync with this Console group. Members added this way will not appear in the spec.members list.
    • Supports regex patterns for dynamic group matching (e.g., ^TEAM-.* matches all groups starting with "TEAM-")
  • spec.membersFromExternalGroups is a read-only list of members added through spec.externalGroups or spec.externalGroupRegex
  • spec.members must be email addresses of members you wish to add to this group
  • spec.permissions are valid permissions as defined in Permissions
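The regex-based sync described above can be read as a filter over the groups reported by your IdP. This is an illustrative sketch only (the helper name is hypothetical, and Console's exact matching semantics such as anchoring and case sensitivity may differ):

```python
import re

def matching_external_groups(patterns, idp_groups):
    """Return the IdP groups that match at least one configured pattern.

    Hypothetical helper, not part of Console: uses standard Python regex
    prefix matching (re.match) to illustrate spec.externalGroupRegex.
    """
    compiled = [re.compile(p) for p in patterns]
    return [g for g in idp_groups if any(c.match(g) for c in compiled)]

groups = ["TEAM-alpha", "TEAM-beta", "LDAP-GRP-A-DEV", "OTHER"]
print(matching_external_groups([r"^TEAM-.*"], groups))  # ['TEAM-alpha', 'TEAM-beta']
```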
Side effects in Console and Kafka:
  • Console
    • Members of the Group are given the associated permissions in the UI over the resources
    • Members of the LDAP or OIDC groups will be automatically added or removed upon login
  • Kafka
    • No side effects

ConsoleUser

  • API key(s): AdminToken
  • Managed with: UI, CLI, API, TF
  • Labels support: Missing
Creates a user with Console permissions.
---
apiVersion: iam/v2
kind: User
metadata:
  name: john.doe@company.org
spec:
  firstName: "John"
  lastName: "Doe"
  permissions:
    - resourceType: PLATFORM
      permissions:
        - taasView
        - datamaskingView
    - resourceType: TOPIC
      cluster: shadow-it
      patternType: PREFIXED
      name: toto-
      permissions:
        - topicViewConfig
        - topicConsume
        - topicProduce
Make sure you set permissions for this user, otherwise they won't have access to Console functionality (such as the Application Catalog or Kafka resources).
User checks:
  • spec.permissions are valid permissions as defined in Permissions
Side effects in Console and Kafka:
  • Console
    • User is given the associated permissions in the UI over the resources
  • Kafka
    • No side effects

KafkaCluster

Creates a Kafka cluster definition in Console.
  • API key(s): AdminToken
  • Managed with: UI, CLI, API, TF
  • Labels support: Partial
---
apiVersion: console/v2
kind: KafkaCluster
metadata:
  name: my-dev-cluster
spec:
  displayName: "My Dev Cluster"
  icon: "kafka"
  color: "#000000"
  bootstrapServers: "localhost:9092"
  ignoreUntrustedCertificate: false
  properties:
    sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
    security.protocol: SASL_SSL
    sasl.mechanism: PLAIN
  schemaRegistry:
    type: "ConfluentLike"
    url: http://localhost:8080
    security:
      type: BasicAuth
      username: some_user
      password: some_password
    ignoreUntrustedCertificate: false
  kafkaFlavor:
    type: "Confluent"
    key: "string"
    secret: "string"
    confluentEnvironmentId: "string"
    confluentClusterId: "string"
metadata.name, spec.displayName, spec.icon and spec.color are combined to create the visual identity of the KafkaCluster within Console.
KafkaCluster checks:
  • spec.icon (optional, default kafka) is a valid entry from our Icon Sets
  • spec.color (optional, default #000000) is a HEX color for spec.icon
  • spec.ignoreUntrustedCertificate (optional, default false) must be one of [true, false]
  • spec.schemaRegistry.type (optional) must be one of [ConfluentLike, Glue]
  • spec.kafkaFlavor.type (optional) must be one of [Confluent, Aiven, Gateway]
Conduktor CLI does not verify that your Kafka configuration (spec.bootstrapServers, spec.properties, etc.) is valid. You need to check that in Console directly.
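The visual-identity checks above can be mirrored client-side before applying the manifest. This is a hypothetical pre-flight sketch, not part of the Conduktor CLI; it only encodes the documented defaults and the HEX-color rule:

```python
import re

# Documented rule: spec.color is a HEX color such as #000000.
HEX_COLOR = re.compile(r"^#[0-9a-fA-F]{6}$")

def check_visual_identity(spec):
    """Apply the documented defaults for spec.icon and spec.color and
    validate the color format. Illustrative helper only."""
    icon = spec.get("icon", "kafka")      # documented default: kafka
    color = spec.get("color", "#000000")  # documented default: #000000
    if not HEX_COLOR.match(color):
        raise ValueError(f"spec.color must be a HEX color such as #000000, got {color!r}")
    return icon, color

print(check_visual_identity({"displayName": "My Dev Cluster"}))  # ('kafka', '#000000')
```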

Schema registry

This section lets you associate a schema registry with your KafkaCluster.

Confluent or Confluent-like Registry

spec:
  schemaRegistry:
    type: "ConfluentLike"
    url: http://localhost:8080
    ignoreUntrustedCertificate: false
    security:
      type: BasicAuth
      username: some_user
      password: some_password
Confluent schema registry checks:
  • spec.schemaRegistry.url must be a single URL of a schema registry
    • Multiple URLs are not supported for now. Coming soon
  • spec.schemaRegistry.ignoreUntrustedCertificate (optional, default false) must be one of [true, false]
  • spec.schemaRegistry.properties (optional) is Java Properties formatted key values to further configure the SchemaRegistry
  • spec.schemaRegistry.security.type (optional) must be one of [BasicAuth, BearerToken, SSLAuth]

AWS Glue registry

spec:
  schemaRegistry:
    type: "Glue"
    region: eu-west-1
    registryName: default
    security:
      type: Credentials
      accessKeyId: accessKey
      secretKey: secretKey
AWS Glue registry checks:
  • spec.schemaRegistry.region must be a valid AWS region
  • spec.schemaRegistry.registryName must be a valid AWS Glue Registry in this region
  • spec.schemaRegistry.security.type must be one of [Credentials, FromContext, FromRole]
Credentials
Use AWS API Key/Secret to connect to the Glue registry.
    security:
      type: Credentials
      accessKeyId: AKIAIOSFODNN7EXAMPLE
      secretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
FromContext
    security:
      type: FromContext
      profile: default
FromRole
    security:
      type: FromRole
      role: arn:aws:iam::123456789012:role/example-role

Kafka provider

This section lets you configure the Kafka provider for this KafkaCluster.

Confluent Cloud

Provide your Confluent Cloud details to get additional features in Console:
  • Confluent Cloud service account support
  • Confluent Cloud API key support
spec:
  kafkaFlavor:
    type: "Confluent"
    key: "yourApiKey123456"
    secret: "yourApiSecret123456"
    confluentEnvironmentId: "env-12345"
    confluentClusterId: "lkc-67890"
Aiven

Provide your Aiven Cloud details to get additional features in Console:
  • Aiven service accounts support
  • Aiven ACLs support
spec:
  kafkaFlavor:
    type: "Aiven"
    apiToken: "a1b2c3d4e5f6g7h8i9j0"
    project: "my-kafka-project"
    serviceName: "my-kafka-service"
Gateway

Provide your Gateway details to get additional features in Console:
  • Interceptor support
spec:
  kafkaFlavor:
    type: "Gateway"
    url: "http://gateway:8888"
    user: "admin"
    password: "admin"
    virtualCluster: passthrough

Icon sets

cloudBolt, cloudRainbow, cloud, snowflake
pooStorm, poop, bolt, umbrella
tennisBall, rugbyBall, trafficCone, faucet
basketShopping, box, scaleBalanced, sunglasses
sword, axeBattle, vial, featherPointed
bomb, flag, heart, key
fireExtinguisher, fireFlameCurved, alien, helmetBattle
ghost, robot, dog, elephant
bird, crab, catSpace, planetRinged
meteor, moon, spaceStation, rocketLaunch
paperPlane, carSide, buildingColumns, castle
acorn, burgerLettuce, croissant, mug
cactus, clover, cameraCctv, calendar
alarmClock, compass, gamepadModern, server
shieldBlank, computerClassic, dharmachakra, kafka

KafkaConnectCluster

Creates a Kafka Connect cluster definition in Console.
  • API key(s): AdminToken
  • Managed with: API, CLI, UI, TF
  • Labels support: Partial
---
apiVersion: console/v2
kind: KafkaConnectCluster
metadata:
  cluster: my-dev-cluster
  name: connect-1
spec:
  displayName: "Connect 1"
  urls: "http://localhost:8083"
  headers:
    X-PROJECT-HEADER: value
    AnotherHeader: test
  ignoreUntrustedCertificate: false
  security:
    type: "BasicAuth"
    username: "toto"
    password: "my-secret"
KafkaConnectCluster checks:
  • metadata.cluster has to be a valid KafkaCluster name.
  • spec.urls has to be a single URL of a Kafka Connect cluster. Multiple URLs are not currently supported.
  • spec.ignoreUntrustedCertificate (optional, default false). Has to be true or false.
  • spec.headers (optional) has to be key-value pairs of HTTP headers.
  • spec.security.type (optional) has to be BasicAuth, BearerToken or SSLAuth. Find out more.

KsqlDBCluster

  • API key(s): AdminToken
  • Managed with: UI, CLI, API
  • Labels support: Missing
Creates a ksqlDB cluster definition in Console.
---
apiVersion: console/v2
kind: KsqlDBCluster
metadata:
  cluster: my-dev-cluster
  name: ksql-1
spec:
  displayName: "KSQL 1"
  url: "http://localhost:8088"
  ignoreUntrustedCertificate: false
  security:
    type: "BasicAuth"
    username: "toto"
    password: "my-secret"
KsqlDBCluster checks:
  • metadata.cluster has to be a valid KafkaCluster name.
  • spec.url has to be a single URL of a KsqlDB cluster.
  • spec.ignoreUntrustedCertificate (optional), default is false. Has to be true or false.
  • spec.headers (optional) has to be key-value pairs of HTTP headers.
  • spec.security.type (optional) has to be BasicAuth, BearerToken or SSLAuth. Find out more.

Alerts

  • API key(s): AdminToken, AppToken
  • Managed with: UI, CLI, API
  • Labels support: Missing
Creates an alert in Console.
---
apiVersion: console/v3
kind: Alert
metadata:
  name: messages-in-dead-letter-queue
  group: support-team # will be the owner of the alert, can be either a user, a group or an appInstance
  # user: user@company.org
  # appInstance: my-app-instance
spec:
  cluster: my-dev-cluster
  type: TopicAlert
  topicName: wikipedia-parsed-DLQ
  metric: MessageCount
  operator: GreaterThan
  threshold: 0
  destination:
    type: Slack
    channel: "alerts-p1"
Alert checks:
  • metadata.user|metadata.group|metadata.appInstance has to be a valid user, group or appInstance.
  • spec.destination.type can be either Slack, Teams or Webhook. When set to:
    • Slack: spec.destination.channel has to be a valid Slack channel ID
    • Teams: spec.destination.url has to be a valid Teams webhook URL
    • Webhook:
      • spec.destination.url has to be a valid URL
      • spec.destination.method has to be GET, POST, PUT or DELETE
      • spec.destination.headers (optional) has to be key-value pairs of HTTP headers
      • spec.destination.authentication.type (optional) has to be BasicAuth (define spec.destination.authentication.username and spec.destination.authentication.password) or BearerToken (define spec.destination.authentication.token).
  • spec.cluster has to be a valid KafkaCluster name.
  • spec.type has to be BrokerAlert, TopicAlert, KafkaConnectAlert or ConsumerGroupAlert. When set to:
    • BrokerAlert: spec.metric has to be MessageIn, MessageOut, MessageSize, OfflinePartitionCount, PartitionCount, UnderMinIsrPartitionCount or UnderReplicatedPartitionCount.
    • TopicAlert: spec.metric has to be MessageCount, MessageIn, MessageOut or MessageSize and spec.topicName has to be a Kafka topic that the owner can access.
    • KafkaConnectAlert: spec.metric has to be FailedTaskCount; spec.connectName has to be a valid KafkaConnect cluster associated with this spec.cluster Kafka cluster and spec.connectorName has to be a Kafka Connect connector that the owner can access.
    • ConsumerGroupAlert: spec.metric has to be OffsetLag or TimeLag and spec.consumerGroupName has to be a Kafka consumer group that the owner can access.
  • spec.metric depends on the spec.type.
  • spec.operator has to be GreaterThan, GreaterThanOrEqual, LessThan, LessThanOrEqual or NotEqual.
  • spec.threshold has to be a number.
  • spec.disable (optional), default is false. Has to be true or false.
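The operator/threshold pair can be read as a plain comparison against the measured metric. This sketch maps the documented spec.operator values onto Python comparisons (illustration only; the function name is hypothetical):

```python
import operator

# The documented spec.operator values, mapped to comparisons.
OPERATORS = {
    "GreaterThan": operator.gt,
    "GreaterThanOrEqual": operator.ge,
    "LessThan": operator.lt,
    "LessThanOrEqual": operator.le,
    "NotEqual": operator.ne,
}

def alert_fires(metric_value, op, threshold):
    """Return True when the measured metric trips the alert."""
    return OPERATORS[op](metric_value, threshold)

# The example alert (MessageCount GreaterThan 0) fires as soon as the
# DLQ topic contains any message:
print(alert_fires(3, "GreaterThan", 0))  # True
print(alert_fires(0, "GreaterThan", 0))  # False
```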

Partner Zones

  • API key(s): AdminToken
  • Managed with: UI, CLI, API
  • Labels support: Partial
Create or update a Partner Zone.
---
apiVersion: console/v2
kind: PartnerZone
metadata:
  name: external-partner-zone
spec:
  displayName: External Partner Zone
  description: An external partner to exchange data with.
  url: https://partner1.com
  partner:
    name: John Doe
    role: Data analyst
    email: johndoe@partner.io
    phone: 07827 837 177
  cluster: cdk-gateway
  underlyingCluster: cluster1
  authenticationMode:
    serviceAccount: partner-external-partner
    type: PLAIN
  topics:
    - name: topic-a
      backingTopic: kafka-topic-a
      permission: WRITE
    - name: topic-b
      backingTopic: kafka-topic-a
      permission: READ
  trafficControlPolicies:
    maxProduceRate: 1e+06
    maxConsumeRate: 1e+06
    limitCommitOffset: 30
  headers:
    addOnProduce:
        - key: partner-name
          value: external-analytics-partner
          overrideIfExists: false
        - key: client-info
          value: "Client:{{clientId}}, from IP:{{userIp}}"
          overrideIfExists: true
        - key: kafka-api
          value: "Kafka API Key:{{apiKey}}, version {{apiKeyVersion}}"
          overrideIfExists: true
        - key: produce-metadata
          value: "User:{{user}}, via Gateway:{{gatewayHost}}, at timestampMillis:{{timestampMillis}}"
          overrideIfExists: true
    removeOnConsume:
        - keyRegex: my_team_prefix.*
Partner Zone checks:
  • spec.displayName is mandatory.
  • spec.description, spec.url and spec.partner are optional and provide context information.
  • spec.cluster has to be a valid Console cluster technical ID with the Provider configured as Gateway.
  • spec.underlyingCluster has to be a valid Console cluster technical ID, where the cluster has to be defined under the Gateway referenced by spec.cluster.
    • When not specified, the value defaults to spec.cluster, hence selecting the main cluster behind Gateway by default.
  • spec.authenticationMode.serviceAccount has to be a local Gateway service account. It doesn't need to exist before creating the Partner Zone; the service account will be created automatically.
  • spec.topics[].name is the name of the topic as it will appear to your external partner. This can be different from the backing topic name.
  • spec.topics[].backingTopic is the internal name of the topic that you want to share.
  • spec.topics[].permission has to be set to either READ or WRITE (which includes READ).
  • trafficControlPolicies.maxProduceRate (optional), sets the maximum rate (in bytes/s) at which the partner can produce messages to the topics per Gateway node.
  • trafficControlPolicies.maxConsumeRate (optional), sets the maximum rate (in bytes/s) at which the partner can consume messages from the topics per Gateway node.
  • trafficControlPolicies.limitCommitOffset (optional), sets the maximum number of commit requests (in requests/minute) that the partner can make per Gateway node.
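Since each of these limits applies per Gateway node, the effective cluster-wide ceiling depends on how many Gateway nodes you run. A small illustration (the function is hypothetical and assumes the partner's traffic is spread across all nodes):

```python
def cluster_wide_ceiling(per_node_rate, gateway_nodes):
    """Effective cluster-wide limit when a per-node limit is enforced
    on every Gateway node. Illustrative only."""
    return per_node_rate * gateway_nodes

# maxProduceRate: 1e+06 means 1 MB/s per node; with 3 Gateway nodes the
# partner could produce up to 3 MB/s in total:
print(cluster_wide_ceiling(1_000_000, 3))  # 3000000
```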
Side effects in Console and Kafka: Once created or updated, the following fields will be made available:
  • metadata.updatedAt (by consecutive get from the CLI/API).
  • metadata.status (by consecutive get from the CLI/API). Possible values are PENDING, READY or FAILED.
  • metadata.failedReason will be populated in case of FAILED status.
  • The service account will be created if it doesn’t exist and will be granted the permissions as declared in spec.topics.
  • The traffic control policies will be applied to the service account.

HTTP security properties

HTTP security properties are used in KafkaCluster (schema registry), KafkaConnectCluster and KsqlDBCluster.

Basic authentication

  security:
    type: "BasicAuth"
    username: "toto"
    password: "my-secret"

Bearer token

  security:
    type: "BearerToken"
    token: "toto"

mTLS/client certificate

  security:
    type: "SSLAuth"
    key: |
      -----BEGIN PRIVATE KEY-----
      MIIOXzCCDUegAwIBAgIRAPRytMVYJNUgCbhnA+eYumgwDQYJKoZIhvcNAQELBQAw
      ...
      IFyCs+xkcgvHFtBjjel4pnIET0agtbGJbGDEQBNxX+i4MDA=
      -----END PRIVATE KEY-----
    certificateChain: |
      -----BEGIN CERTIFICATE-----
      MIIOXzCCDUegAwIBAgIRAPRytMVYJNUgCbhnA+eYumgwDQYJKoZIhvcNAQELBQAw
      RjELMAkGA1UEBhMCVVMxIjAgBgNVBAoTGUdvb2dsZSBUcnVzdCBTZXJ2aWNlcyBM
      ...
      8/s+YDKveNdoeQoAmGQpUmxhvJ9rbNYj+4jiaujkfxT/6WtFN8N95r+k3W/1K4hs
      IFyCs+xkcgvHFtBjjel4pnIET0agtbGJbGDEQBNxX+i4MDA=
      -----END CERTIFICATE-----

Permissions

Permissions are used in groups and users and let you configure access to any Kafka resource or Console feature. A permission applies to a given resourceType, which determines the required fields.

Topic permissions

# Grants consume, produce and view config to all topics toto-* on shadow-it cluster
- resourceType: TOPIC
  cluster: shadow-it
  patternType: PREFIXED
  name: toto-
  permissions:
    - topicViewConfig
    - topicConsume
    - topicProduce
  • resourceType: TOPIC
  • cluster is a valid Kafka cluster
  • patternType is either PREFIXED or LITERAL
  • name is the name of the topic or topic prefix to apply the permissions to
  • permissions is a list of valid topic permissions
Available topic permissions:
  • topicConsume: Permission to consume messages from the topic.
  • topicProduce: Permission to produce (write) messages to the topic.
  • topicViewConfig: Permission to view the topic configuration.
  • topicEditConfig: Permission to edit the topic configuration.
  • topicCreate: Permission to create a new topic.
  • topicDelete: Permission to delete the topic.
  • topicAddPartition: Permission to add partitions to the topic.
  • topicEmpty: Permission to empty (delete all messages from) the topic.
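The patternType field used throughout these permission blocks can be sketched as follows (assumed semantics, illustrated with a hypothetical helper):

```python
def resource_matches(pattern_type, name, resource):
    """PREFIXED matches any resource whose name starts with `name`;
    LITERAL requires an exact match. Illustrative only."""
    if pattern_type == "PREFIXED":
        return resource.startswith(name)
    if pattern_type == "LITERAL":
        return resource == name
    raise ValueError("patternType must be PREFIXED or LITERAL")

print(resource_matches("PREFIXED", "toto-", "toto-orders"))  # True
print(resource_matches("LITERAL", "toto-", "toto-orders"))   # False
```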

Subject permissions

# Grants view and edit compatibility to all subjects starting with sub-* on shadow-it cluster
- resourceType: SUBJECT
  cluster: shadow-it
  patternType: PREFIXED
  name: sub-
  permissions:
    - subjectView
    - subjectEditCompatibility
  • resourceType: SUBJECT
  • cluster is a valid Kafka cluster
  • patternType is either PREFIXED or LITERAL
  • name is the name of the subject or subject prefix to apply the permissions to
  • permissions is a list of valid subject permissions
Available subject permissions:
  • subjectCreateUpdate: Permission to create or update the subject.
  • subjectDelete: Permission to delete the subject.
  • subjectEditCompatibility: Permission to edit the subject compatibility settings.
  • subjectView: Permission to view the subject details.

ConsumerGroup permissions

# Grants view and reset on all consumer groups starting with group-* on shadow-it cluster
- resourceType: CONSUMER_GROUP
  cluster: shadow-it
  patternType: PREFIXED
  name: group-
  permissions:
    - consumerGroupView
    - consumerGroupReset
  • resourceType: CONSUMER_GROUP
  • cluster is a valid Kafka cluster
  • patternType is either PREFIXED or LITERAL
  • name is the name of the consumer group or consumer group prefix to apply the permissions to
  • permissions is a list of valid consumer group permissions
Available ConsumerGroup permissions:
  • consumerGroupCreate: Permission to create a new consumer group.
  • consumerGroupReset: Permission to reset the consumer group.
  • consumerGroupDelete: Permission to delete the consumer group.
  • consumerGroupView: Permission to view the consumer group details.

Cluster permissions

# Grants view and edit broker, edit schema registry compatibility, view ACL and manage ACL on shadow-it cluster
- resourceType: CLUSTER
  name: shadow-it
  permissions:
    - clusterViewBroker
    - clusterEditSRCompatibility
    - clusterEditBroker
    - clusterViewACL
    - clusterManageACL
  • resourceType: CLUSTER
  • name is the name of the cluster to apply the permissions to
    • Use * for all clusters
  • permissions is a list of valid cluster permissions
Available cluster permissions:
  • clusterViewBroker: Permission to view broker details.
  • clusterEditSRCompatibility: Permission to edit schema registry compatibility settings.
  • clusterEditBroker: Permission to edit broker configuration.
  • clusterViewACL: Permission to view ACLs for the cluster.
  • clusterManageACL: Permission to manage ACLs for the cluster.

KafkaConnect permissions

# Grants create and delete on all connectors starting with connector-* on shadow-it cluster and kafka-connect-cluster
- resourceType: KAFKA_CONNECT
  cluster: shadow-it
  kafkaConnect: kafka-connect-cluster
  patternType: PREFIXED
  name: connector-
  permissions:
    - kafkaConnectorCreate
    - kafkaConnectorDelete
  • resourceType: KAFKA_CONNECT
  • cluster is a valid Kafka cluster
  • kafkaConnect is a valid Kafka Connect cluster
  • patternType is either PREFIXED or LITERAL
  • name is the name of the connector or connector prefix to apply the permissions to
  • permissions is a list of valid Kafka Connect permissions
Available KafkaConnect permissions:
  • kafkaConnectorViewConfig: Permission to view the Kafka Connect configuration.
  • kafkaConnectorStatus: Permission to view the status of Kafka Connect connectors.
  • kafkaConnectorEditConfig: Permission to edit the Kafka Connect configuration.
  • kafkaConnectorDelete: Permission to delete connectors.
  • kafkaConnectorCreate: Permission to create new connectors.
  • kafkaConnectPauseResume: Permission to pause and resume connectors.
  • kafkaConnectRestart: Permission to restart connectors.

KsqlDB permissions

# Grants all permissions on KsqlDB cluster ksql-cluster
- resourceType: KSQLDB
  cluster: shadow-it
  ksqlDB: ksql-cluster
  permissions:
    - ksqldbAccess
  • resourceType: KSQLDB
  • cluster is a valid Kafka cluster
  • ksqlDB is a valid ksqlDB cluster
  • permissions is a list of valid KsqlDB permissions
Available KsqlDB permissions:
  • ksqldbAccess: Grants all permissions on the ksqlDB cluster.

Console permissions

# Grants Console permissions
- resourceType: PLATFORM
  permissions:
    - userView
    - datamaskingView
  • resourceType: PLATFORM
  • permissions is a list of valid Console permissions
Available Console permissions:
  • clusterConnectionsManage: Permission to add, edit and remove Kafka clusters on Console.
  • certificateManage: Permission to add, edit and remove TLS certificates on Console.
  • userManage: Permission to manage Console users, groups and permissions.
  • userView: Permission to view Console users, groups and permissions.
  • datamaskingManage: Permission to manage data policies (masking rules).
  • datamaskingView: Permission to view data policies.
  • notificationChannelManage: Permission to manage integration channels.
  • auditLogView: Permission to browse the audit log.
  • taasView: Permission to view the Application Catalog.
  • chargebackManage: Permission to view Chargeback and manage its settings.
  • sqlManage: Permission to view indexed topics and create SQL queries.