Preview functionality: Insights is currently a preview feature and is subject to change as we continue working on it.
Governance is one of the sections in the Insights dashboard. It helps platform teams measure and improve data governance practices across the organization by tracking schema adoption, serialization format usage and Self-service workflow coverage. Strong governance practices reduce operational risk, improve data quality and enable teams to move faster with confidence. Use these metrics to measure progress toward organizational governance goals.

Overview

The Governance section displays three graphs that measure how well your Kafka infrastructure is governed:
  • Topics with schema - The percentage and count of topics using Schema Registry for data contracts
  • Serialization formats - The distribution of serialization formats in use across topics
  • Self-service coverage - The percentage of topics managed through governed Self-service workflows
Each graph provides a visual representation of governance maturity and helps identify gaps where teams need support or where policies need enforcement.

Topics with schema

What it shows

The Topics with schema graph displays the percentage of topics that have schemas registered in Schema Registry. The visualization shows:
  • Percentage of VIP topics with schemas - VIP topics that have registered schemas and enforce data contracts
  • Percentage of all topics with schemas - All topics that have registered schemas and enforce data contracts
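The two metrics above are simple ratios over your topic inventory. As an illustration (with a hypothetical inventory, not real Insights data), they can be derived like this:

```python
# Hypothetical topic inventory; Insights computes these from your clusters.
topics = [
    {"name": "orders",   "vip": True,  "has_schema": True},
    {"name": "payments", "vip": True,  "has_schema": False},
    {"name": "clicks",   "vip": False, "has_schema": True},
    {"name": "logs",     "vip": False, "has_schema": False},
]

def schema_coverage(topics, vip_only=False):
    """Percentage of topics (optionally VIP only) with a registered schema."""
    pool = [t for t in topics if t["vip"]] if vip_only else topics
    with_schema = sum(t["has_schema"] for t in pool)
    return 100 * with_schema / len(pool)

print(schema_coverage(topics))                 # 50.0 - all topics
print(schema_coverage(topics, vip_only=True))  # 50.0 - VIP topics only
```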

Why it matters

A Schema Registry enforces data contracts between producers and consumers, improving data quality and reducing operational risk. Schemas validate data at produce time, rejecting invalid messages before they enter topics. Compatibility modes (backward, forward, full) enable safe schema evolution without breaking existing consumers or requiring coordinated deployments.
Topics without schemas lack data contract enforcement. Producers can send any format, causing consumer failures and data quality issues.
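The backward-compatibility guarantee mentioned above can be made concrete with a toy model. This is an illustrative sketch, not Schema Registry's actual compatibility checker: it treats a schema as a dict of fields and considers a change backward compatible when every field the new schema adds has a default, so consumers on the new schema can still read old records.

```python
# Toy backward-compatibility check. Simplified on purpose: real Avro
# compatibility also allows type promotions, aliases, unions, etc.

def is_backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    for field, spec in new_schema.items():
        if field not in old_schema and not spec.get("default_set", False):
            return False  # new required field: old records can't be decoded
        if field in old_schema and spec["type"] != old_schema[field]["type"]:
            return False  # type change (simplification)
    return True

v1 = {"user_id": {"type": "string"}, "amount": {"type": "double"}}
# Adding an optional field with a default is backward compatible.
v2 = {**v1, "currency": {"type": "string", "default_set": True}}
# Adding a required field is not.
v3 = {**v1, "currency": {"type": "string"}}

print(is_backward_compatible(v1, v2))  # True
print(is_backward_compatible(v1, v3))  # False
```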

How to interpret

  • High (80-100%) - Strong governance with defined data contracts, controlled evolution and established standards
  • Medium (50-79%) - Partial adoption with gaps in coverage across teams or topic categories
  • Low (<50%) - Significant gaps with high risk of breaking changes and data quality issues
Prioritize schema adoption for VIP topics first and test in lower environments before deploying to production.

Serialization formats

What it shows

The Serialization formats graph displays the distribution of serialization formats across topics that have schemas registered in Schema Registry. The following Schema Registry-compatible formats are supported:
  • Avro - Compact binary format with rich schema features
  • Protobuf - Protocol Buffers format with cross-language support
  • JSON Schema - JSON with schema validation and documentation
The graph shows what percentage of schema-registered topics use each format.
This graph only includes topics with schemas. Topics without schemas or using plain JSON/String formats are not represented in this visualization.

Why it matters

Using multiple serialization formats across your organization increases complexity and operational overhead. Each format requires different serializers, deserializers, tooling and expertise. Standardizing on a single format (or at most two formats for specific use cases) reduces the learning curve for developers, simplifies troubleshooting and makes it easier to establish consistent data governance practices.
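One practical reason binary, schema-based formats like Avro dominate is payload size: field names live in the schema, not in every message. The stdlib-only sketch below illustrates the idea by comparing a JSON encoding with a hand-packed binary layout (a stand-in for what a schema-based encoder produces; real Avro encoding differs in detail):

```python
import json
import struct

record = {"user_id": 42, "amount": 19.99, "currency": "EUR"}

# Text encoding: field names are repeated in every single message.
as_json = json.dumps(record).encode()

# Schema-based binary encoding (Avro-like idea): the schema describes the
# layout, so the payload carries only the values. Layout here: int32,
# float64, 3 bytes, little-endian, no padding.
as_binary = struct.pack("<id3s", record["user_id"], record["amount"],
                        record["currency"].encode())

print(len(as_json), len(as_binary))  # the binary payload is much smaller
```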

How to interpret

  • Standardized - One dominant format represents 90%+ of schema-registered topics, indicating strong format consistency and simplified operations
  • Mixed - Multiple formats each represent significant percentages, suggesting inconsistent practices across teams or ongoing migration efforts
Having multiple formats isn’t necessarily wrong, but it increases operational complexity. Consider standardizing on one primary format unless there are strong technical reasons to support multiple formats.

Self-service coverage

Self-service is available with Conduktor Scale Plus only.

What it shows

The Self-service coverage graph displays the percentage of topics managed through governed self-service workflows. Two key metrics are shown:
  • Overall self-service coverage - Percentage of all topics created and managed via self-service
  • VIP topic self-service coverage - Percentage of business-critical topics under self-service governance

Why it matters

Self-service workflows enforce organizational standards at topic creation time through templates that mandate naming conventions, required configurations (replication factor, partition count), schema requirements and ownership labels. This provides an audit trail, enables approval workflows and reduces ad-hoc creation that bypasses governance.

How to interpret

  • High (80-100%) - Strong adoption with most topics created through proper governance channels
  • Medium (50-79%) - Partial adoption, possibly indicating rollout in progress or legacy topics predating self-service
  • Low (<50%) - Limited adoption requiring establishment and promotion of workflows
Prioritize self-service coverage for VIP topics first. Business-critical topics benefit most from governed creation and change management.

Actions to take in Console

Use Console’s Schema Registry, data quality and self-service features to improve governance metrics.
1. Register schema in Console

Go to Schema Registry and click New Subject. Provide schema definition (Avro, Protobuf or JSON Schema), strategy, and other required settings.
2. Update producers

Configure Schema Registry URL and use appropriate serializers (KafkaAvroSerializer, KafkaProtobufSerializer or KafkaJsonSchemaSerializer) in producer applications.
3. Verify usage

Check the topic’s Overview tab to confirm schema association.
Data is now validated at produce time. Invalid messages are rejected before entering the topic.
Alternative: Auto-registration - Producers can automatically register schemas on first produce. This requires Schema Registry URL configuration and create permissions in Schema Registry.
Learn more about Schema Registry
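The producer configuration in step 2 looks roughly like the sketch below when using Confluent's Python client (confluent-kafka); the Java serializers named above are the equivalent JVM classes. The broker and registry URLs, topic name and schema are placeholders, and the snippet needs a running broker and Schema Registry, so treat it as a configuration sketch rather than a definitive implementation.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# Placeholder URLs: point these at your own broker and Schema Registry.
registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})

value_schema = """
{"type": "record", "name": "Order", "fields": [
  {"name": "order_id", "type": "string"},
  {"name": "amount", "type": "double"}
]}
"""
serializer = AvroSerializer(registry, value_schema)

producer = Producer({"bootstrap.servers": "broker:9092"})

# Serialize against the registered schema, then produce. Invalid records
# fail here, before they ever reach the topic.
payload = serializer({"order_id": "o-1", "amount": 9.99},
                     SerializationContext("orders", MessageField.VALUE))
producer.produce("orders", value=payload)
producer.flush()
```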
Enforce serialization formats with Trust

This feature requires Conduktor Trust and Conduktor Gateway 3.9 or later.
Conduktor Trust enforces serialization formats using Rules (validation logic) attached to Policies (applied to topics with actions).
1. Create validation Rules

Go to Rules under the Trust section and click +New Rule. Choose the appropriate rule type:
  • EnforceAvro (built-in) - Ensures messages have a schema ID, the ID exists in Schema Registry, and the schema type is Avro
  • JSON schema - Validates JSON messages against a JSON schema definition with required fields and structure
Rules define validation logic but do nothing on their own until attached to a Policy.
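To illustrate the kind of check a JSON schema rule performs, here is a stdlib-only toy validator for a minimal JSON Schema subset (`required` plus per-property `type`). Conduktor's actual rule engine is far more complete; this only sketches the concept.

```python
import json

# Minimal JSON Schema subset used for illustration.
schema = {
    "required": ["user_id", "amount"],
    "properties": {
        "user_id": {"type": "string"},
        "amount": {"type": "number"},
    },
}

TYPES = {"string": str, "number": (int, float), "object": dict}

def validate(message_bytes: bytes, schema: dict) -> list:
    """Return a list of violations; an empty list means compliant."""
    try:
        doc = json.loads(message_bytes)
    except ValueError:
        return ["not valid JSON"]
    errors = [f"missing required field: {f}"
              for f in schema.get("required", []) if f not in doc]
    for field, spec in schema.get("properties", {}).items():
        if field in doc and not isinstance(doc[field], TYPES[spec["type"]]):
            errors.append(f"wrong type for {field}")
    return errors

print(validate(b'{"user_id": "u1", "amount": 9.5}', schema))  # []
print(validate(b'{"user_id": "u1"}', schema))  # missing required field
```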
2. Create and configure Policy

Go to Policies under the Trust section and click +New Policy. Attach your Rules, select target topics (specific topics or prefixes like production-*), and assign to a user group with “Manage data quality” permission.
Enable Policy actions based on your governance requirements:
  • Report - Log violations in Policy history for monitoring
  • Block - Reject non-compliant messages entirely
  • Mark - Add violation header for downstream handling
Block action prevents message delivery. Communicate format requirements and provide migration support before enabling.
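The three Policy actions can be pictured as a dispatch on the validation outcome. The function below is a hypothetical sketch of that behavior, not Conduktor's API; names and return shapes are invented for illustration.

```python
# Hypothetical model of the Report / Mark / Block actions described above.

def apply_policy(message: dict, violations: list, actions: set) -> dict:
    outcome = {
        "delivered": True,
        "headers": dict(message.get("headers", {})),
        "logged": [],
    }
    if not violations:
        return outcome
    if "report" in actions:
        outcome["logged"] = list(violations)  # recorded in Policy history
    if "mark" in actions:
        # Add a violation header so downstream consumers can handle it.
        outcome["headers"]["violation"] = "; ".join(violations)
    if "block" in actions:
        outcome["delivered"] = False  # reject the non-compliant message
    return outcome

marked = apply_policy({"headers": {}}, ["not Avro"], {"report", "mark"})
print(marked["delivered"], marked["headers"])  # True {'violation': 'not Avro'}
blocked = apply_policy({"headers": {}}, ["not Avro"], {"block"})
print(blocked["delivered"])  # False
```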
3. Monitor enforcement

Track violations in Policy detail pages and support teams adopting required formats through Schema Registry integration.
Learn more about enforcing data quality
Adopt Self-service workflows

Self-service requires Conduktor Scale Plus.
Self-service uses a GitOps approach where platform teams define applications and policies using YAML resources managed through the Conduktor CLI. Application teams then create and manage their own Kafka resources within defined boundaries.
1. Define applications and instances

Create Application resources representing streaming apps or data pipelines, and ApplicationInstance resources linking each application to specific Kafka clusters with service accounts.
2. Establish topic policies

Define TopicPolicy resources that enforce standards for resource creation: replication factors, partition limits, retention settings, schema requirements and naming conventions. Create multiple policies for different environments to balance governance with flexibility.
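To make the kinds of constraints a TopicPolicy expresses concrete, here is a stdlib-only sketch of the checks such a policy performs at creation time. The policy fields here are invented for illustration; the real TopicPolicy YAML uses its own field names and constraint types, documented in the Self-service reference.

```python
import re

# Hypothetical policy: field names are illustrative, not Conduktor's schema.
policy = {
    "name_pattern": r"^(dev|prod)\.[a-z0-9-]+\.v\d+$",  # naming convention
    "replication_factor": {3},                          # allowed values
    "max_partitions": 12,                               # partition limit
}

def check_topic(spec: dict, policy: dict) -> list:
    """Return the list of policy violations for a proposed topic spec."""
    errors = []
    if not re.match(policy["name_pattern"], spec["name"]):
        errors.append("name violates naming convention")
    if spec["replication_factor"] not in policy["replication_factor"]:
        errors.append("replication factor not allowed")
    if spec["partitions"] > policy["max_partitions"]:
        errors.append("too many partitions")
    return errors

good = {"name": "prod.orders.v1", "replication_factor": 3, "partitions": 6}
bad = {"name": "Orders", "replication_factor": 1, "partitions": 50}
print(check_topic(good, policy))  # []
print(check_topic(bad, policy))   # three violations
```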
3. Enable application team autonomy

Application teams use the Conduktor CLI with their application context to create Topics, Subjects, Connectors and ApplicationInstancePermissions following defined policies. Console provides read-only catalog views in Application Catalog and Topic Catalog pages for discoverability.
Teams can now manage their own Kafka resources within governance boundaries and collaborate through permission grants.
Understand self-service concepts
Get started with the self-service tutorial
Bring existing topics under self-service - Review unmanaged topics in the Governance section, contact owners to explain ownership benefits, verify compliance with standards and import topics into application instances using Console’s import functionality.

Troubleshooting

How do we migrate existing plain-JSON topics to Avro?

Process:
  1. Design Avro schema matching JSON structure and register in Schema Registry
  2. Update producers to use KafkaAvroSerializer with Schema Registry URL
  3. Update consumers to use KafkaAvroDeserializer with Schema Registry URL
Migration approaches:
  • Dual-format (recommended) - Create new Avro topic, write to both topics temporarily, migrate consumers, then decommission old topic
  • Big-bang - Schedule downtime, deploy all updates simultaneously with rollback plan ready
Avro messages cannot be read by plain JSON consumers. Coordinate carefully and test thoroughly in non-production environments first.
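The dual-format window can be sketched as a producer wrapper that writes every event to both the legacy JSON topic and the new Avro topic until all consumers have migrated. This is a toy model under stated assumptions: the class and its in-memory `sent` list stand in for a real Kafka client's produce calls.

```python
# Toy dual-write producer for the migration window. In production, each
# publish would go through a Kafka client with the matching serializer.

class DualWriter:
    def __init__(self, json_topic: str, avro_topic: str):
        self.json_topic = json_topic
        self.avro_topic = avro_topic
        self.sent = []  # stand-in for producer.produce(...) calls

    def publish(self, event: dict):
        # Legacy consumers keep reading the JSON topic; migrated
        # consumers read the Avro topic. Both see every event.
        self.sent.append((self.json_topic, "json", event))
        self.sent.append((self.avro_topic, "avro", event))

writer = DualWriter("orders", "orders-avro")
writer.publish({"order_id": 1})
print([topic for topic, _, _ in writer.sent])  # ['orders', 'orders-avro']
```

Once consumer lag on the legacy topic shows no remaining readers, the JSON write path and the old topic can be decommissioned.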
How do we drive Self-service adoption?

Address common objections:
  • Demonstrate self-service is faster than manual ticketing
  • Show how templates save time with proven configurations
  • Explain governance benefits (reduced errors, better visibility)
  • Streamline approval processes to minimize wait times
Make adoption easier:
  • Provide excellent documentation and training
  • Create templates for common use cases
  • Assign self-service champions within teams
  • Provide dedicated support during initial adoption
Do all topics need schemas?

No, 100% coverage is not necessary. Prioritize schemas for:
  • Production topics with business data
  • Topics with multiple consumers or teams
  • VIP topics with high consumer counts
  • Topics requiring data evolution and compatibility
Topics that may not need schemas:
  • Configuration or control topics
  • Internal framework topics (Kafka Streams, Kafka Connect)
  • Development or testing topics
Recommended targets: 90-100% of production business-data topics and 80%+ overall coverage.
Set realistic targets based on organizational maturity. Focus on production and business-critical topics while allowing pragmatic exceptions.
Which topics should we prioritize for schema adoption?

Highest priority:
  • VIP topics without schemas (many consumers, breaking changes affect multiple teams)
  • Topics with frequent schema evolution (high risk without compatibility enforcement)
  • Topics with multiple producing teams (schema enforces consistency)
Medium priority:
  • High-throughput topics (debugging costs increase at scale)
  • Topics in critical data pipelines (quality issues cascade to business decisions)
Lower priority:
  • Low-volume internal topics with single consumers
Approach: Export Governance data, cross-reference with the VIP topics section, review recent data quality incidents and create a prioritized migration plan.
Use Insights to identify topics that are both VIP and lack schemas. These represent the highest-priority governance gaps.
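The cross-referencing step above amounts to a simple scoring pass over the exported data. The field names below are hypothetical stand-ins for whatever your export contains; the logic is the point: VIP topics without schemas sort first, then remaining unschematized topics by consumer count.

```python
# Hypothetical export rows; real Insights exports may use different fields.
topics = [
    {"name": "orders",   "vip": True,  "has_schema": False, "consumers": 12},
    {"name": "clicks",   "vip": False, "has_schema": False, "consumers": 2},
    {"name": "payments", "vip": True,  "has_schema": True,  "consumers": 8},
]

def priority(t: dict) -> tuple:
    # VIP-without-schema first, then any topic without a schema,
    # ties broken by consumer count (more consumers = more blast radius).
    return (t["vip"] and not t["has_schema"], not t["has_schema"], t["consumers"])

backlog = sorted((t for t in topics if not t["has_schema"]),
                 key=priority, reverse=True)
print([t["name"] for t in backlog])  # ['orders', 'clicks']
```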