Learn how Kafka partitioners affect message distribution in 15 minutes.

Kafka producer partitioners determine which partition receives each message, affecting load distribution, ordering guarantees, and overall system performance. Understanding partitioning strategies helps you optimize for your specific use case.

What you'll learn:
  • How partitioners route messages to partitions
  • The difference between sticky and round-robin partitioning
  • When to use key-based vs keyless partitioning
  • How to implement custom partitioners

How partitioning works

When a producer sends a message, the partitioner decides which partition receives it based on:
  1. Message key: If present, used to determine partition assignment
  2. Partition specification: Explicit partition number in ProducerRecord
  3. Partitioner logic: Default or custom partitioning algorithm
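The decision order above can be sketched as a small helper. This is illustrative only, not part of the Kafka client API: `choosePartition` is a made-up name, and the masked `String.hashCode` stands in for the murmur2 hash the real default partitioner uses.

```java
import java.util.concurrent.ThreadLocalRandom;

public class PartitionChoice {

    // Illustrative decision order only; not part of the Kafka client API.
    // The real default partitioner hashes keys with murmur2, not String.hashCode.
    static int choosePartition(Integer explicitPartition, String key, int partitionCount) {
        // 1. An explicit partition in the ProducerRecord always wins
        if (explicitPartition != null) {
            return explicitPartition;
        }
        // 2. With a key, hash it to a partition (stand-in hash)
        if (key != null) {
            return (key.hashCode() & 0x7fffffff) % partitionCount;
        }
        // 3. Otherwise fall back to the keyless strategy (random here;
        //    the real default since Kafka 2.4 is the sticky partitioner)
        return ThreadLocalRandom.current().nextInt(partitionCount);
    }

    public static void main(String[] args) {
        System.out.println(choosePartition(4, "user123", 6)); // prints 4: explicit partition wins
        System.out.println(choosePartition(null, "user123", 6));
    }
}
```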
(Diagram: Kafka producer partitioning)
**Partition assignment impact:** Partitioning affects message ordering, consumer distribution, load balancing, and data locality. Choose your partitioning strategy carefully based on your use case requirements.

Default partitioner behavior

With message keys

When messages have keys, the default partitioner uses a hash-based approach:
```
partition = hash(key) % number_of_partitions
```
**Hot keys:** If one key appears far more often than others (e.g., a single tenant generating most traffic), all those messages land on the same partition. This creates a "hot partition" that overwhelms one broker and one consumer. Monitor per-partition message rates and consider key redesign or a custom partitioner if distribution is uneven.
Characteristics:
  • The same key always maps to the same partition (as long as the partition count is unchanged)
  • Provides per-key ordering guarantees
  • Load distribution depends on the key distribution
  • Key-to-partition mapping changes when the partition count changes
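A quick simulation shows how a skewed key distribution turns into a hot partition. The hash here is a `String.hashCode` stand-in for murmur2, and the `tenant-A`/`user` key names are made up for illustration.

```java
public class HotKeyDemo {

    // Stand-in for Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    // Counts messages per partition for a skewed workload:
    // 800 messages from one hot tenant, 200 from assorted users.
    static int[] simulate(int numPartitions) {
        int[] counts = new int[numPartitions];
        for (int i = 0; i < 800; i++) {
            counts[partitionFor("tenant-A", numPartitions)]++;
        }
        for (int i = 0; i < 200; i++) {
            counts[partitionFor("user" + i, numPartitions)]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        int[] counts = simulate(6);
        for (int p = 0; p < counts.length; p++) {
            System.out.println("partition " + p + ": " + counts[p] + " messages");
        }
    }
}
```

Whichever partition `tenant-A` hashes to receives at least 80% of the traffic, which is exactly the imbalance that per-partition rate monitoring would surface.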

Without message keys

For messages without keys, behavior depends on Kafka version:
| Kafka version | Partitioner | Behavior |
| --- | --- | --- |
| < 2.4 | Round-robin | Cycles through partitions sequentially |
| >= 2.4 | Sticky | Sticks to one partition until the batch is full |

Sticky partitioner

The sticky partitioner, introduced in Kafka 2.4, improves performance for messages without keys.

How sticky partitioner works

  1. Partition selection: Choose random available partition
  2. Batch filling: Send all messages to chosen partition until batch fills
  3. Partition switching: Switch to different partition for next batch
  4. Repeat process: Continue cycle for optimal batching
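The cycle above can be modeled with a toy sketch. This is not the real `StickyPartitionCache`: real batches fill by bytes and linger time, so a simple message count stands in for "batch is full" here.

```java
import java.util.concurrent.ThreadLocalRandom;

public class StickySketch {
    final int partitionCount;
    final int batchSize;       // toy model: messages per batch; real batching is byte-based
    int currentPartition;
    int inCurrentBatch = 0;

    StickySketch(int partitionCount, int batchSize) {
        this.partitionCount = partitionCount;
        this.batchSize = batchSize;
        // Step 1: choose a random starting partition
        this.currentPartition = ThreadLocalRandom.current().nextInt(partitionCount);
    }

    // Returns the partition for the next keyless message
    int nextPartition() {
        if (inCurrentBatch == batchSize) {
            // Steps 3-4: batch full, stick to a different random partition
            int next;
            do {
                next = ThreadLocalRandom.current().nextInt(partitionCount);
            } while (partitionCount > 1 && next == currentPartition);
            currentPartition = next;
            inCurrentBatch = 0;
        }
        // Step 2: keep filling the current batch
        inCurrentBatch++;
        return currentPartition;
    }
}
```

Over many batches the random switching evens out, which is why sticky partitioning still distributes load across all partitions over time.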

Benefits of sticky partitioner

| Benefit | Description |
| --- | --- |
| Improved throughput | Better batch utilization (more messages per batch) |
| Fewer network requests | Reduced overhead from larger batches |
| Better compression | Larger batches compress more efficiently |
| Even distribution | Over time, messages distribute evenly across partitions |

Configuration

Sticky partitioner is enabled by default in Kafka 2.4+:
```properties
# Explicitly configure (usually not needed)
partitioner.class=org.apache.kafka.clients.producer.internals.DefaultPartitioner
```

Round-robin partitioner

The original default partitioner (pre-2.4) that cycles through partitions.
```properties
partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
```
**Round-robin performance impact:** Round-robin partitioning creates smaller batches and reduces throughput compared to sticky partitioning for keyless messages.

Partitioning strategies decision guide

Custom partitioner implementation

Create custom partitioners for specific business requirements:
```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

public class CustomPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {

        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        int partitionCount = partitions.size();

        // Keyless messages: spread randomly across all partitions
        if (key == null) {
            return ThreadLocalRandom.current().nextInt(partitionCount);
        }

        // Custom key-based partitioning logic
        String keyString = key.toString();

        // Route priority messages to a dedicated partition
        if (keyString.startsWith("priority-")) {
            return 0;
        }

        // Fall back to the default murmur2 hash for other messages
        return Utils.toPositive(Utils.murmur2(keyBytes)) % partitionCount;
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}
```
Register your custom partitioner:
```properties
partitioner.class=com.example.CustomPartitioner
```
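In producer setup code, the same property is usually set alongside the serializers. A minimal sketch, assuming the class name from the registration above; the bootstrap address is a placeholder, and only the `java.util.Properties` wiring is shown (the commented line is where a real `KafkaProducer` would be created):

```java
import java.util.Properties;

public class ProducerConfigExample {

    static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        props.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Point the producer at the custom partitioner
        props.setProperty("partitioner.class", "com.example.CustomPartitioner");
        return props;
    }

    public static void main(String[] args) {
        // new KafkaProducer<String, String>(producerProps()) would pick this up
        System.out.println(producerProps().getProperty("partitioner.class"));
    }
}
```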

Partition count considerations

**Partition count changes:** Changing partition count breaks key-to-partition mapping and can disrupt ordering guarantees. Plan partition counts carefully and avoid frequent changes.

Impact of adding partitions

```
Original:      hash(key) % 3 partitions → key "user123" → partition 2
After adding:  hash(key) % 6 partitions → key "user123" → partition 5
```
Consequences:
  • The same key may map to a different partition after the change
  • Per-key ordering guarantees are broken across the transition
  • May create temporary hotspots
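The remapping can be demonstrated with a stand-in hash (`String.hashCode` instead of murmur2, so the exact partition numbers differ from the example above, but the effect is the same): growing a topic from 3 to 6 partitions moves some keys.

```java
public class PartitionRemap {

    // Stand-in for the default murmur2-based hash (illustrative only)
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Find a key whose partition changes when the count grows from 3 to 6
        for (int i = 0; i < 100; i++) {
            String key = "user" + i;
            int before = partitionFor(key, 3);
            int after = partitionFor(key, 6);
            if (before != after) {
                System.out.println(key + ": partition " + before + " -> " + after);
                break;
            }
        }
    }
}
```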

Best practices

| Scenario | Recommendation |
| --- | --- |
| Need ordering per entity | Use message keys |
| Maximum throughput | Use sticky partitioner (no keys) |
| Custom routing logic | Implement a custom partitioner |
| Even distribution | Monitor for hot keys |
Always test custom partitioners with realistic data distributions and load patterns. Partition assignment affects performance significantly.
**See it in practice with Conduktor:** Conduktor Console displays partition distribution and message counts per partition. Monitor how your partitioning strategy affects load balance across your topic partitions.

Next steps