This guide will help you install, configure, and use your custom Kafka deserializer with Console, which will present serialized messages in a human-readable way.
Console looks for JARs in the /opt/conduktor/plugins folder during startup. There are several ways to make your custom deserializers available to Console.
There are various ways to add custom deserializer JARs to Console in a Kubernetes environment. Here's a possible implementation, which should be adapted to your infrastructure and policies.
InitContainer
PersistentVolumeClaim
You can use a custom InitContainer that will be responsible for downloading a JAR from a trusted source (S3, SFTP, HTTP).
If your custom deserializers have dependencies, they have to be embedded within the same JAR file (Fat JAR / Uber JAR).
Just provide your init container configuration to the Console helm chart:
console-values.yaml
```yaml
platform:
  initContainers:
    - name: init-plugins
      image: curlimages/curl:latest # Using curl image
      args:
        [
          "-L",
          "https://github.com/conduktor/my_custom_deserializers/releases/download/2.0.0/my_custom_deserializers_2.13-2.0.0.jar",
          "-o",
          "/opt/conduktor/plugins/my_custom_deserializers_2.13-2.0.0.jar"
        ]
      securityContext: # avoid permission issues and curl image not having numeral UID error
        fsGroup: 0 # same default GID as Console
        runAsNonRoot: true
        runAsUser: 10001 # same default UID as Console
      volumeMounts:
        - name: plugins
          mountPath: /opt/conduktor/plugins # Mounting to the desired directory
  extraVolumes:
    - name: plugins
      emptyDir: {}
  extraVolumeMounts:
    - name: plugins
      mountPath: /opt/conduktor/plugins # Mount for Console container
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: console-plugins-pvc
  namespace: console-namespace
spec:
  accessModes:
    - ReadWriteMany # or ReadWriteOnce but limited to a single node
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard # change for one that supports the requested accessModes
```
Then, in the Console Helm chart, mount the created PVC as an extra volume:
console-values.yaml
```yaml
platform:
  extraVolumes:
    - name: plugins
      persistentVolumeClaim:
        claimName: console-plugins-pvc
  extraVolumeMounts:
    - name: plugins
      mountPath: /opt/conduktor/plugins
  podSecurityContext:
    runAsNonRoot: true # default
    fsGroup: 0 # enable write to the root group to allow our container user to write on the plugin volume
```
Once the Console pod is up with the volume mounted, you can copy your JAR into it with a kubectl command:
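For example (the pod name `console-0` is hypothetical; look up your actual pod name with `kubectl get pods`):

```
kubectl cp ./my_custom_deserializers_2.13-2.0.0.jar \
  console-namespace/console-0:/opt/conduktor/plugins/my_custom_deserializers_2.13-2.0.0.jar
```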
AWS Glue uses a proprietary wire format that differs from Confluent’s schema registry protocol. To consume Glue-encoded messages, implement a deserializer that uses the AWS Glue Schema Registry library and configure it with your Glue registry ARN and AWS credentials via the properties field.
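As an illustration, such a configuration might look like the fragment below. The registry name, region, and ARN are placeholder values, and the property keys follow the AWS Glue Schema Registry client's naming; verify them against the library version you bundle.

```properties
# Hypothetical values; replace with your own Glue registry settings
region=eu-west-1
registry.name=my-glue-registry
avroRecordType=GENERIC_RECORD
```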
Apicurio Registry uses a compatible wire format but requires its own client library. Point the deserializer at your Apicurio endpoint using the apicurio.registry.url property as shown above.
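For instance, a minimal configuration fragment could look like this (the endpoint URL is a placeholder for your own Apicurio Registry deployment):

```properties
# Hypothetical endpoint; point at your Apicurio Registry instance
apicurio.registry.url=https://apicurio.example.com/apis/registry/v2
```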
Any format that can be decoded by a Java class implementing org.apache.kafka.common.serialization.Deserializer<T> works with Console. This includes internal binary formats, custom protobuf variants, or legacy serialization schemes.
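As a minimal sketch of what such a class can do, the following hypothetical deserializer renders printable ASCII payloads as text and falls back to a hex dump otherwise. In a real plugin it would implement `org.apache.kafka.common.serialization.Deserializer<String>` from kafka-clients; the interface declaration is omitted here so the snippet stays self-contained.

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch: in a real plugin this class would declare
// "implements Deserializer<String>" and be packaged as a Fat JAR.
public class HexFallbackDeserializer {

    // Mirrors Deserializer<T>.deserialize(String topic, byte[] data)
    public String deserialize(String topic, byte[] data) {
        if (data == null) return null;
        // Render printable ASCII as-is; otherwise fall back to a hex dump
        for (byte b : data) {
            if (b < 0x20 || b > 0x7e) return toHex(data);
        }
        return new String(data, StandardCharsets.US_ASCII);
    }

    private static String toHex(byte[] data) {
        StringBuilder sb = new StringBuilder();
        for (byte b : data) sb.append(String.format("%02x", b & 0xff));
        return sb.toString();
    }
}
```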
- **Deserializer not listed in the UI** — Check Console startup logs for `Register custom Kafka Deserializer` entries. If missing, verify the JAR path and that the class implements `Deserializer<T>`.
- **ClassNotFoundException at runtime** — Your deserializer has dependencies that aren't bundled. Rebuild as a Fat JAR (Uber JAR) with all transitive dependencies included.
- **Deserialization returns garbled output** — The message was produced with a different serializer than the one you configured. Check the magic byte prefix: Confluent Schema Registry messages start with `0x00` followed by a 4-byte schema ID.
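To check that prefix programmatically, a small helper like the one below (illustrative, not part of Console) extracts the big-endian schema ID from a payload in Confluent's wire format, and returns -1 when the payload does not match:

```java
import java.nio.ByteBuffer;

public class ConfluentWireFormat {

    // Confluent wire format: 1 magic byte (0x00) followed by a
    // 4-byte big-endian schema ID, then the serialized payload.
    public static int schemaId(byte[] payload) {
        if (payload == null || payload.length < 5 || payload[0] != 0x00) {
            return -1; // not Confluent-framed
        }
        return ByteBuffer.wrap(payload, 1, 4).getInt();
    }
}
```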