Preview functionality: Insights is currently a preview feature and is subject to change as we continue working on it.
Overview
The Governance section displays three graphs that measure how well your Kafka infrastructure is governed:

- Topics with schema - the percentage and count of topics using Schema Registry for data contracts
- Serialization formats - distribution of serialization formats in use across topics
- Self-service coverage - percentage of topics managed through governed Self-service workflows
Topics with schema
What it shows
The Topics with schema graph displays the percentage of topics that have schemas registered in Schema Registry. The visualization shows:

- Percentage of VIP topics with schemas - VIP topics that have registered schemas and enforce data contracts
- Percentage of all topics with schemas - All topics that have registered schemas and enforce data contracts
Why it matters
A Schema Registry enforces data contracts between producers and consumers, improving data quality and reducing operational risk. Schemas validate data at produce time, rejecting invalid messages before they enter topics. Compatibility modes (backward, forward, full) enable safe schema evolution without breaking existing consumers or requiring coordinated deployments.

Topics without schemas lack data contract enforcement. Producers can send any format, causing consumer failures and data quality issues.
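For example, with backward compatibility a consumer that upgrades to a new schema can still read data written with the old one. A minimal sketch using the Apache Avro Java library (schema and field names are illustrative):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;

public class CompatibilityCheck {
    public static void main(String[] args) {
        // Writer schema: what producers currently publish
        Schema writer = new Schema.Parser().parse("""
            {"type":"record","name":"Order","fields":[
              {"name":"id","type":"string"}]}""");

        // Reader schema: adds a field with a default value,
        // which keeps the change backward compatible
        Schema reader = new Schema.Parser().parse("""
            {"type":"record","name":"Order","fields":[
              {"name":"id","type":"string"},
              {"name":"status","type":"string","default":"NEW"}]}""");

        // Backward compatibility: can the new reader decode old data?
        SchemaCompatibility.SchemaPairCompatibility result =
            SchemaCompatibility.checkReaderWriterCompatibility(reader, writer);
        System.out.println(result.getType()); // COMPATIBLE
    }
}
```

Removing the `default` from the new field would make the check report INCOMPATIBLE, because the new reader could no longer decode records written with the old schema.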
How to interpret
- High (80-100%) - strong governance with defined data contracts, controlled evolution and established standards
- Medium (50-79%) - partial adoption with gaps in coverage across teams or topic categories
- Low (<50%) - significant gaps with high risk of breaking changes and data quality issues

Prioritize schema adoption for VIP topics first and test in lower environments before deploying to production.
Serialization formats
What it shows
The Serialization formats graph displays the distribution of serialization formats across topics that have schemas registered in Schema Registry. The supported Schema Registry-compatible formats are:

- Avro - Compact binary format with rich schema features
- Protobuf - Protocol Buffers format with cross-language support
- JSON Schema - JSON with schema validation and documentation
This graph only includes topics with schemas. Topics without schemas or using plain JSON/String formats are not represented in this visualization.
Why it matters
Using multiple serialization formats across your organization increases complexity and operational overhead. Each format requires different serializers, deserializers, tooling and expertise. Standardizing on a single format (or at most two formats for specific use cases) reduces the learning curve for developers, simplifies troubleshooting and makes it easier to establish consistent data governance practices.

How to interpret
- Standardized - One dominant format represents 90%+ of schema-registered topics, indicating strong format consistency and simplified operations
- Mixed - Multiple formats each represent significant percentages, suggesting inconsistent practices across teams or ongoing migration efforts

Having multiple formats isn’t necessarily wrong, but it increases operational complexity. Consider standardizing on one primary format unless there are strong technical reasons to support multiple formats.
Self-service coverage
Self-service is available with Conduktor Scale Plus only.
What it shows
The Self-service coverage graph displays the percentage of topics managed through governed self-service workflows. Two key metrics are shown:

- Overall self-service coverage - Percentage of all topics created and managed via self-service
- VIP topic self-service coverage - Percentage of business-critical topics under self-service governance
Why it matters
Self-service workflows enforce organizational standards at topic creation time through templates that mandate naming conventions, required configurations (replication factor, partition count), schema requirements and ownership labels. This provides an audit trail, enables approval workflows and reduces ad-hoc creation that bypasses governance.

How to interpret
- High (80-100%) - Strong adoption with most topics created through proper governance channels
- Medium (50-79%) - Partial adoption, possibly indicating rollout in progress or legacy topics predating self-service
- Low (<50%) - Limited adoption requiring establishment and promotion of workflows

Prioritize self-service coverage for VIP topics first. Business-critical topics benefit most from governed creation and change management.
Actions to take in Console
Use Console’s Schema Registry, data quality and self-service features to improve governance metrics.
Add schemas to topics
1. Register schema in Console
Go to Schema Registry and click New Subject. Provide schema definition (Avro, Protobuf or JSON Schema), strategy, and other required settings.
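For example, a minimal Avro schema definition you might paste into the New Subject form (the record name, namespace and fields are hypothetical):

```json
{
  "type": "record",
  "name": "OrderCreated",
  "namespace": "com.example.orders",
  "fields": [
    {"name": "orderId", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string", "default": "EUR"}
  ]
}
```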
2. Update producers
Configure Schema Registry URL and use appropriate serializers (KafkaAvroSerializer, KafkaProtobufSerializer or KafkaJsonSchemaSerializer) in producer applications.
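A minimal Java producer sketch, assuming the Confluent kafka-avro-serializer dependency is on the classpath (broker and registry URLs, topic and schema are placeholders):

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");                    // placeholder
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://schema-registry:8081");  // placeholder

        Schema schema = new Schema.Parser().parse("""
            {"type":"record","name":"OrderCreated","namespace":"com.example.orders",
             "fields":[{"name":"orderId","type":"string"}]}""");

        GenericRecord value = new GenericData.Record(schema);
        value.put("orderId", "o-123");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // The serializer validates the record against the registered schema
            // before it ever reaches the topic
            producer.send(new ProducerRecord<>("orders.created", value));
        }
    }
}
```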
3. Verify usage
Check the topic’s Overview tab to confirm schema association.
Data is now validated at produce time. Invalid messages are rejected before entering the topic.
Enforce serialization formats
This feature requires Conduktor Trust and Conduktor Gateway 3.9 or later.
1. Create validation Rules
Go to Rules under the Trust section and click +New Rule. Choose the appropriate rule type:
- EnforceAvro (built-in) - Ensures messages have a schema ID, the ID exists in Schema Registry, and the schema type is Avro
- JSON schema - Validates JSON messages against a JSON schema definition with required fields and structure
Rules define validation logic but do nothing on their own until attached to a Policy.
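For the JSON schema rule type, you supply a standard JSON Schema document. An illustrative draft-07 schema requiring two fields (field names are hypothetical):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["orderId", "amount"],
  "properties": {
    "orderId": {"type": "string"},
    "amount": {"type": "number", "minimum": 0}
  },
  "additionalProperties": false
}
```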
2. Create and configure Policy
Go to Policies under the Trust section and click +New Policy. Attach your Rules, select target topics (specific topics or prefixes like production-*), and assign to a user group with “Manage data quality” permission.

Enable Policy actions based on your governance requirements:

- Report - Log violations in Policy history for monitoring
- Block - Reject non-compliant messages entirely
- Mark - Add violation header for downstream handling
Block action prevents message delivery. Communicate format requirements and provide migration support before enabling.
3. Monitor enforcement
Track violations in Policy detail pages and support teams adopting required formats through Schema Registry integration.
Implement Self-service workflows
Self-service requires Conduktor Scale Plus.
1. Define applications and instances
Create Application resources representing streaming apps or data pipelines, and ApplicationInstance resources linking each application to specific Kafka clusters with service accounts.
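A sketch of the two resources as Conduktor CLI YAML manifests; names, cluster and service account are placeholders, so check the Self-service resource reference for the exact fields your version supports:

```yaml
---
apiVersion: self-service/v1
kind: Application
metadata:
  name: "clickstream-app"            # placeholder
spec:
  title: "Clickstream App"
  description: "Ingests website click events"
  owner: "clickstream-team"          # Console group that owns the application
---
apiVersion: self-service/v1
kind: ApplicationInstance
metadata:
  application: "clickstream-app"
  name: "clickstream-app-dev"
spec:
  cluster: "dev-cluster"             # placeholder Kafka cluster
  serviceAccount: "sa-clickstream"   # placeholder service account
  resources:
    - type: TOPIC
      name: "click."                 # the instance owns topics prefixed with click.
      patternType: PREFIXED
```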
2. Establish topic policies
Define TopicPolicy resources that enforce standards for resource creation: replication factors, partition limits, retention settings, schema requirements and naming conventions. Create multiple policies for different environments to balance governance with flexibility.
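A TopicPolicy sketch; the policy keys target fields of the Topic resource and the constraint names (OneOf, Range, Match) follow the Self-service reference, but verify them against your Console version:

```yaml
---
apiVersion: self-service/v1
kind: TopicPolicy
metadata:
  name: "dev-topic-standards"        # placeholder
spec:
  policies:
    spec.replicationFactor:
      constraint: OneOf
      values: ["3"]
    spec.partitions:
      constraint: Range
      min: 1
      max: 12
    metadata.name:
      constraint: Match
      pattern: ^click\.[a-z0-9-]+$   # enforce the team's naming convention
```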
3. Enable application team autonomy
Application teams use the Conduktor CLI with their application context to create Topics, Subjects, Connectors and ApplicationInstancePermissions following defined policies. Console provides read-only catalog views in Application Catalog and Topic Catalog pages for discoverability.
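For example, a team member could create a topic with `conduktor apply -f topic.yaml`, using a manifest like this (cluster and topic names are placeholders and must satisfy the policies above):

```yaml
---
apiVersion: kafka/v2
kind: Topic
metadata:
  cluster: "dev-cluster"             # placeholder
  name: "click.page-views"           # matches the click. prefix owned by the instance
spec:
  replicationFactor: 3
  partitions: 6
  configs:
    retention.ms: "604800000"        # 7 days
```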
Teams can now manage their own Kafka resources within governance boundaries and collaborate through permission grants.
Troubleshooting
How do I migrate topics from plain JSON to Avro?
Process:

- Design an Avro schema matching the JSON structure and register it in Schema Registry
- Update producers to use KafkaAvroSerializer with the Schema Registry URL
- Update consumers to use KafkaAvroDeserializer with the Schema Registry URL (a consumer sketch follows below)

Then choose a cutover strategy:

- Dual-format (recommended) - Create a new Avro topic, write to both topics temporarily, migrate consumers, then decommission the old topic
- Big-bang - Schedule downtime, deploy all updates simultaneously with a rollback plan ready
Avro messages cannot be read by plain JSON consumers. Coordinate carefully and test thoroughly in non-production environments first.
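A minimal consumer-side sketch for the consumer update step (URLs, group ID and topic name are placeholders):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AvroConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");                    // placeholder
        props.put("group.id", "orders-migration");                        // placeholder
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://schema-registry:8081");  // placeholder

        try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders.created"));                // placeholder topic
            // Each record is decoded using the schema referenced by the message
            consumer.poll(Duration.ofSeconds(5)).forEach(r -> System.out.println(r.value()));
        }
    }
}
```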
What if teams resist adopting self-service workflows?
Address common objections:
- Demonstrate self-service is faster than manual ticketing
- Show how templates save time with proven configurations
- Explain governance benefits (reduced errors, better visibility)
- Streamline approval processes to minimize wait times
- Provide excellent documentation and training
- Create templates for common use cases
- Assign self-service champions within teams
- Provide dedicated support during initial adoption
Should 100% of topics have schemas?
No, 100% coverage is not necessary. Prioritize schemas for:

- Production topics with business data
- Topics with multiple consumers or teams
- VIP topics with high consumer counts
- Topics requiring data evolution and compatibility

Schemas are typically optional for:

- Configuration or control topics
- Internal framework topics (Kafka Streams, Kafka Connect)
- Development or testing topics
Set realistic targets based on organizational maturity. Focus on production and business-critical topics while allowing pragmatic exceptions.
How do I identify which topics need schemas most urgently?
Highest priority:
- VIP topics without schemas (many consumers, breaking changes affect multiple teams)
- Topics with frequent schema evolution (high risk without compatibility enforcement)
- Topics with multiple producing teams (schema enforces consistency)
- High-throughput topics (debugging costs increase at scale)
- Topics in critical data pipelines (quality issues cascade to business decisions)
Lowest priority:

- Low-volume internal topics with single consumers
Related resources
- View Insights overview
- Configure and manage topics
- Set up monitoring and alerts
- Monitor and manage brokers
- Set up RBAC
- Learn about Self-service topic management
- Configure and use Schema Registry
- Configure data quality policies to enforce standards
- Give us feedback or request a feature