Self-service Resources
Application
An application represents a streaming app or data pipeline that is responsible for producing, consuming or processing data from Kafka.
In Self-service, it is used to organize and group, under the same umbrella, multiple deployments of the same application (dev, prod) or different microservices that belong to the same team.
API Keys: Admin API Key
Managed with: CLI, API
# Application
---
apiVersion: self-service/v1
kind: Application
metadata:
  name: "clickstream-app"
spec:
  title: "Clickstream App"
  description: "Free-form text, probably multiline markdown"
  owner: "groupA" # technical id of the Conduktor Console Group
Application checks:
- spec.owner is a valid Console Group
- Delete MUST fail if there are associated ApplicationInstances
Side effect in Console & Kafka:
None.
Deploying this object will only create the Application in Console. It can be viewed in the Application Catalog.
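As a minimal sketch (assuming the Conduktor CLI is installed and configured against your Console with an Admin API Key, and that the manifest above is saved as application.yaml, a file name chosen here purely for illustration), the resource could be deployed from the command line:
# Sketch only: the file name is an assumption, not part of this guide
conduktor apply -f application.yaml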
Application Instance
An Application Instance represents an actual deployment of an Application on a Kafka cluster for a Service Account.
This is the core concept of Self-service as it ties everything together:
- Kafka cluster
- Service Account
- Ownership on resources
- Policies on resources
API Keys: Admin API Key
Managed with: CLI, API
---
apiVersion: self-service/v1
kind: ApplicationInstance
metadata:
  application: "clickstream-app"
  name: "clickstream-dev"
spec:
  cluster: "shadow-it"
  serviceAccount: "sa-clicko"
  topicPolicyRef:
    - "generic-dev-topic"
    - "clickstream-naming-rule"
  resources:
    - type: TOPIC
      patternType: PREFIXED
      name: "click."
    - type: CONSUMER_GROUP
      patternType: PREFIXED
      name: "click."
    - type: SUBJECT
      patternType: PREFIXED
      name: "click."
    - type: CONNECTOR
      connectCluster: shadow-connect
      patternType: PREFIXED
      name: "click."
    - type: TOPIC
      patternType: PREFIXED
      ownershipMode: LIMITED # Topics are still maintained by Central Team
      name: "legacy-click."
AppInstance checks:
- metadata.application is a valid Application
- spec.cluster is a valid Console Cluster technical id
- spec.cluster is immutable (can't update after creation)
- spec.serviceAccount is optional, and if present must not already be used by another AppInstance on the same spec.cluster
- spec.topicPolicyRef is optional, and if present must be a valid list of TopicPolicy
- spec.resources[].type can be TOPIC, CONSUMER_GROUP, SUBJECT or CONNECTOR
- spec.resources[].connectCluster is only mandatory when type is CONNECTOR
- spec.resources[].connectCluster is a valid Connect Cluster linked to the Kafka Cluster spec.cluster
- spec.resources[].patternType can be PREFIXED or LITERAL
- spec.resources[].name must not overlap with any other ApplicationInstance on the same cluster
  - i.e. if there is already an owner for click., the following are forbidden:
    - click.orders.: the resource is a child-resource of click.
    - cli: the resource is a parent-resource of click.
- spec.resources[].ownershipMode is optional, default ALL. Can be ALL or LIMITED
Side effect in Console & Kafka:
- Console
  - Members of the Owner Group can create Application API Keys from the UI
  - Resources with ownershipMode set to ALL:
    - ApplicationInstance is given all permissions in the UI and the CLI over the owned resources
  - Resources with ownershipMode set to LIMITED:
    - ApplicationInstance is not given Create/Update/Delete permissions in the UI and the CLI over the owned resources
      - Can't use the CLI apply command
      - Can't Create/Delete the resource in the UI
      - Everything else (restart connector, Browse & Produce from Topic, ...) is still available
  - Read more about ownershipMode here
- Kafka
  - Service Account is granted the following ACLs over the declared resources, depending on the type:
    - Topic: READ, WRITE, DESCRIBE_CONFIGS
    - ConsumerGroup: READ
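Since an Application typically has one ApplicationInstance per environment (dev, prod), a production deployment would be declared as a separate instance. The sketch below is illustrative only: the cluster technical id prod-kafka and the service account sa-clicko-prod are assumptions, not values defined elsewhere in this guide.
---
apiVersion: self-service/v1
kind: ApplicationInstance
metadata:
  application: "clickstream-app"
  name: "clickstream-prod" # hypothetical production instance
spec:
  cluster: "prod-kafka" # assumption: technical id of a separate Console cluster
  serviceAccount: "sa-clicko-prod" # assumption: a dedicated production service account
  resources:
    - type: TOPIC
      patternType: PREFIXED
      name: "click."
    - type: CONSUMER_GROUP
      patternType: PREFIXED
      name: "click."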
Topic Policy
Topic Policies force Application Teams to conform to Topic rules set at their ApplicationInstance level.
Typical use cases include:
- Safeguarding from invalid or risky Topic configuration
- Enforcing naming convention
- Enforcing metadata
Topic policies are not applied automatically: you must explicitly link them to an ApplicationInstance with spec.topicPolicyRef.
API Keys: Admin API Key
Managed with: CLI, API
---
apiVersion: self-service/v1
kind: TopicPolicy
metadata:
  name: "generic-dev-topic"
spec:
  policies:
    metadata.labels.data-criticality:
      constraint: OneOf
      values: ["C0", "C1", "C2"]
    spec.configs.retention.ms:
      constraint: Range
      max: 3600000
      min: 60000
    spec.replicationFactor:
      constraint: OneOf
      values: ["3"]
---
apiVersion: self-service/v1
kind: TopicPolicy
metadata:
  name: "clickstream-naming-rule"
spec:
  policies:
    metadata.name:
      constraint: Match
      pattern: ^click\.(?<event>[a-z0-9-]+)\.(avro|json)$
TopicPolicy checks:
- spec.policies requires YAML paths into the Topic resource YAML. For example:
  - metadata.name to create constraints on the Topic name
  - metadata.labels.<key> to create constraints on Topic label <key>
  - spec.partitions to create constraints on the number of partitions
  - spec.replicationFactor to create constraints on the replication factor
  - spec.configs.<key> to create constraints on Topic config <key>
- spec.policies.<key>.constraint can be Range, OneOf or Match
  - Read the Policy Constraints section for each constraint's specification
With the two Topic policies declared above, the following Topic resource would succeed validation:
---
apiVersion: kafka/v2
kind: Topic
metadata:
  cluster: shadow-it
  name: click.event-stream.avro # Checked by Match ^click\.(?<event>[a-z0-9-]+)\.(avro|json)$ on `metadata.name`
  labels:
    data-criticality: C2 # Checked by OneOf ["C0", "C1", "C2"] on `metadata.labels.data-criticality`
spec:
  replicationFactor: 3 # Checked by OneOf ["3"] on `spec.replicationFactor`
  partitions: 3
  configs:
    cleanup.policy: delete
    retention.ms: '60000' # Checked by Range(60000, 3600000) on `spec.configs.retention.ms`
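For contrast, a hypothetical Topic that violates the same two policies would be rejected; each failing value is marked in a comment and the specific values are illustrative only:
---
apiVersion: kafka/v2
kind: Topic
metadata:
  cluster: shadow-it
  name: click.Event_Stream.avro # Fails Match: uppercase letters and "_" are not allowed by the pattern
  labels:
    data-criticality: C4 # Fails OneOf: not in ["C0", "C1", "C2"]
spec:
  replicationFactor: 2 # Fails OneOf: only "3" is allowed
  partitions: 3
  configs:
    cleanup.policy: delete
    retention.ms: '7200000' # Fails Range: above the 3600000 maximum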
Application Instance Permissions
Application Instance Permissions let teams collaborate with each other.
API Keys: Admin API Key, Application API Key
Managed with: CLI, API
# Permission granted to other Applications
---
apiVersion: self-service/v1
kind: ApplicationInstancePermission
metadata:
  application: "clickstream-app"
  appInstance: "clickstream-app-dev"
  name: "clickstream-app-dev-to-another"
spec:
  resource:
    type: TOPIC
    name: "click.event-stream.avro"
    patternType: LITERAL
  permission: READ
  grantedTo: "another-appinstance-dev"
ApplicationInstancePermission checks:
- spec is immutable
  - Once created, you will only be able to update its metadata. This is to protect you from making a change that could impact an external application
  - Remember this resource affects the target ApplicationInstance's Kafka service account ACLs
  - To edit this resource, delete and recreate it
- spec.resource.type can be TOPIC
- spec.resource.patternType can be PREFIXED or LITERAL
- spec.resource.name must reference a "sub-resource" of metadata.appInstance
  - For example, if you are the owner of the prefix click., you can grant READ or WRITE access to:
    - the whole prefix: click.
    - a sub-prefix: click.orders.
    - a literal topic name: click.orders.france
- spec.permission can be READ or WRITE
- spec.grantedTo must be an ApplicationInstance on the same Kafka cluster as metadata.appInstance
Side effect in Console & Kafka:
- Console
  - Members of the grantedTo ApplicationInstance are given the associated permissions (Read/Write) in the UI over the resource
- Kafka
  - Service Account of the grantedTo ApplicationInstance is granted the following ACLs over the resource, depending on spec.permission:
    - READ: READ, DESCRIBE_CONFIGS
    - WRITE: READ, WRITE, DESCRIBE_CONFIGS
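As a sketch, granting WRITE over a whole sub-prefix instead of a single literal topic would look like the following; the metadata.name and the click.orders. prefix are chosen here purely for illustration:
---
apiVersion: self-service/v1
kind: ApplicationInstancePermission
metadata:
  application: "clickstream-app"
  appInstance: "clickstream-app-dev"
  name: "clickstream-app-dev-write-orders" # hypothetical name
spec:
  resource:
    type: TOPIC
    name: "click.orders." # sub-prefix of the owned click. prefix
    patternType: PREFIXED
  permission: WRITE # the grantedTo service account receives READ, WRITE, DESCRIBE_CONFIGS ACLs
  grantedTo: "another-appinstance-dev"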
Application Group
API Keys: Admin API Key, Application API Key
Managed with: CLI, API
Create Application Groups to directly reflect how your Application operates. You can create as many Application Groups as required to restrict or represent the different teams that use Console on your Application, e.g.:
- Support Team with only Read Access in Production
- DevOps Team with extended access across all environments
- Developers with higher permissions in Dev
Example
# Permissions granted to Console users in the Application
---
apiVersion: self-service/v1
kind: ApplicationGroup
metadata:
  application: "clickstream-app"
  name: "clickstream-support"
spec:
  displayName: Support Clickstream
  description: |
    Members of the Support Group are allowed:
    Read access on all the resources
    Can restart owned connectors
    Can reset offsets
  permissions:
    - appInstance: clickstream-app-dev
      resourceType: TOPIC
      patternType: "LITERAL"
      name: "*" # All owned & subscribed topics
      permissions: ["topicViewConfig", "topicConsume"]
    - appInstance: clickstream-app-dev
      resourceType: CONSUMER_GROUP
      patternType: "LITERAL"
      name: "*" # All owned consumer groups
      permissions: ["consumerGroupCreate", "consumerGroupReset", "consumerGroupDelete", "consumerGroupView"]
    - appInstance: clickstream-app-dev
      connectCluster: local-connect
      resourceType: CONNECTOR
      patternType: "LITERAL"
      name: "*" # All owned connectors
      permissions: ["kafkaConnectViewConfig", "kafkaConnectStatus", "kafkaConnectRestart"]
  members:
    - user1@company.org
    - user2@company.org
  externalGroups:
    - GP-COMPANY-CLICKSTREAM-SUPPORT
ApplicationGroup checks:
- spec.permissions[].appInstance must be an ApplicationInstance associated with this Application (metadata.application)
- spec.permissions[].resourceType can be TOPIC, SUBJECT, CONSUMER_GROUP or CONNECTOR
  - When resourceType is CONNECTOR, the additional field spec.permissions[].connectCluster is mandatory and must be a valid Kafka Connect Cluster name
- spec.permissions[].patternType can be PREFIXED or LITERAL
- spec.permissions[].name must reference a "sub-resource" of metadata.appInstance or any subscribed Topic
  - Use * to include all owned & subscribed resources associated with this appInstance
- spec.permissions[].permissions are valid permissions as defined in Permissions
- spec.members must be email addresses of the members you wish to add to this group
- spec.externalGroups is a list of LDAP or OIDC groups to sync with this Console Group
  - Members added this way will not appear in spec.members
Side effect in Console & Kafka:
- Console
  - Members of the ApplicationGroup are given the associated permissions in the UI over the resources
  - Members of the LDAP or OIDC groups will be automatically added or removed upon login
- Kafka
  - No side effect
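To illustrate that several groups can coexist on the same Application, a second group for another team could reuse the same structure. This is only a sketch: the group name, member address and permission scope below are assumptions, and it reuses only permission strings already shown in the example above.
---
apiVersion: self-service/v1
kind: ApplicationGroup
metadata:
  application: "clickstream-app"
  name: "clickstream-developers" # hypothetical group
spec:
  displayName: Clickstream Developers
  description: "Developer access on the dev instance (illustrative only)"
  permissions:
    - appInstance: clickstream-app-dev
      resourceType: TOPIC
      patternType: "LITERAL"
      name: "*" # All owned & subscribed topics
      permissions: ["topicViewConfig", "topicConsume"]
    - appInstance: clickstream-app-dev
      resourceType: CONSUMER_GROUP
      patternType: "LITERAL"
      name: "*" # All owned consumer groups
      permissions: ["consumerGroupView", "consumerGroupReset"]
  members:
    - dev1@company.org # illustrative member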
Policy Constraints
There are currently 3 available constraints:
- Range validates a range of numbers
- OneOf validates against a list of predefined options
- Match validates using a Regular Expression
Range
Validates the property belongs to a range of numbers (inclusive)
spec.configs.retention.ms:
  constraint: "Range"
  min: 3600000 # 1 hour in ms
  max: 604800000 # 7 days in ms
Validation will succeed with these inputs:
- 3600000 (min)
- 36000000 (between min & max)
- 604800000 (max)
Validation will fail with these inputs:
- 60000 (below min)
- 999999999 (above max)
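Range is not limited to spec.configs paths; for example, a hypothetical rule bounding the partition count (the 1 to 6 bounds are arbitrary, chosen here for illustration) could be expressed as:
spec.partitions:
  constraint: Range
  min: 1
  max: 6 # illustrative upper bound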
OneOf
Validates the property is one of the expected values
spec.configs.cleanup.policy:
  constraint: OneOf
  values: ["delete", "compact"]
Validation will succeed with these inputs:
- delete
- compact
Validation will fail with these inputs:
- delete, compact (valid in Kafka but not allowed by the policy)
- deleet (typo)
Match
Validates the property against a Regular Expression
metadata.name:
  constraint: Match
  pattern: ^wikipedia\.(?<event>[a-z0-9]+)\.(avro|json)$
Validation will succeed with these inputs:
- wikipedia.links.avro
- wikipedia.products.json
Validation will fail with these inputs:
- notwikipedia.products.avro2: ^ and $ prevent anything before and after the pattern
- wikipedia.all-products.avro: (?<event>[a-z0-9]+) prevents anything other than lowercase letters and digits
Optional Flag
Constraints can be marked as optional. In this scenario, the constraint will only be validated if the field exists.
Example:
spec.configs.min.insync.replicas:
  constraint: OneOf
  optional: true
  values: ["2"]
This object will pass validation because min.insync.replicas is not set, so the optional constraint is skipped:
---
apiVersion: kafka/v2
kind: Topic
metadata:
  cluster: shadow-it
  name: click.event-stream.avro
spec:
  replicationFactor: 3
  partitions: 3
  configs:
    cleanup.policy: delete
    retention.ms: '60000'
This object will fail validation because min.insync.replicas is set to a value that is not allowed by the constraint:
---
apiVersion: kafka/v2
kind: Topic
metadata:
  cluster: shadow-it
  name: click.event-stream.avro
spec:
  replicationFactor: 3
  partitions: 3
  configs:
    min.insync.replicas: 3
    cleanup.policy: delete
    retention.ms: '60000'
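For completeness, setting the config to the only allowed value would satisfy the optional constraint; a corrected sketch of the same object:
---
apiVersion: kafka/v2
kind: Topic
metadata:
  cluster: shadow-it
  name: click.event-stream.avro
spec:
  replicationFactor: 3
  partitions: 3
  configs:
    min.insync.replicas: '2' # Checked by OneOf ["2"]; validated only because the field is present
    cleanup.policy: delete
    retention.ms: '60000'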