- Default message size limits across Kafka components
- How to configure producers, brokers, and consumers for large messages
- Performance implications of large messages
- Alternative patterns for handling large payloads
## Default message size limits
By default, Kafka has the following message size limits:

| Component | Configuration | Default |
|---|---|---|
| Producer | max.request.size | 1 MB |
| Broker | message.max.bytes | 1 MB |
| Topic | max.message.bytes | Inherits from broker |
| Consumer | max.partition.fetch.bytes | 1 MB |
## Configure Kafka for large messages
To send messages larger than 1 MB, you need to configure multiple components so that their limits are aligned:

### Producer configuration
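A minimal producer-side sketch. The 10 MB value (10485760 bytes) is illustrative, not a recommendation; size it to your largest expected message:

```properties
# producer.properties -- illustrative values, assuming a 10 MB ceiling
# Maximum size of a single produce request; must cover the largest message
max.request.size=10485760
# Total memory available for buffering records awaiting send
buffer.memory=33554432
```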
### Broker configuration
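On the broker side, the replication fetch size must keep pace with the accepted message size, or followers cannot replicate large messages. Values are illustrative:

```properties
# server.properties -- illustrative values, assuming a 10 MB ceiling
# Largest record batch the broker will accept
message.max.bytes=10485760
# Must be >= message.max.bytes so follower replicas can fetch large messages
replica.fetch.max.bytes=10485760
```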
### Topic configuration
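Topics inherit the broker limit unless overridden. A per-topic override can be set with the `kafka-configs.sh` tool; the topic name `large-payloads` and bootstrap address here are hypothetical:

```shell
# Raise the limit for one topic only (value is illustrative: 10 MB)
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name large-payloads \
  --add-config max.message.bytes=10485760
```

A per-topic override is usually preferable to raising the broker-wide default, since it confines the blast radius of large messages to one topic.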
### Consumer configuration
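Consumers must be able to fetch the largest message in a single partition fetch. Values are illustrative:

```properties
# consumer.properties -- illustrative values, assuming a 10 MB ceiling
# Largest amount of data returned per partition; must cover the largest message
max.partition.fetch.bytes=10485760
# Upper bound for an entire fetch response across all partitions
fetch.max.bytes=52428800
```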
## Performance implications
Sending large messages in Kafka has several performance implications:

| Area | Impact | Mitigation |
|---|---|---|
| Memory | Higher heap usage, GC pressure | Tune JVM heap sizes |
| Network | More bandwidth, potential timeouts | Adjust buffer sizes |
| Disk I/O | More operations, slower compaction | Use faster storage |
| Throughput | Lower overall message rate | Enable compression |
## Alternative approaches
Instead of sending large messages directly, consider these alternatives:

### 1. External storage pattern
Store large payloads in external systems and send only references:

- Keeps Kafka messages small and fast
- Allows for separate scaling of storage and messaging
- Enables efficient caching strategies
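The pattern above (often called the claim-check pattern) can be sketched without any Kafka dependency. The in-memory `blob_store` dictionary is a hypothetical stand-in for an object store such as S3; in practice the reference message would be produced to Kafka:

```python
import uuid

# Hypothetical in-memory blob store standing in for S3, GCS, etc.
blob_store: dict[str, bytes] = {}

def store_payload(payload: bytes) -> str:
    """Upload the large payload and return a small reference key."""
    key = str(uuid.uuid4())
    blob_store[key] = payload
    return key

def build_kafka_message(payload: bytes) -> dict:
    """Build a small reference message instead of sending the raw payload."""
    return {"payload_ref": store_payload(payload), "size": len(payload)}

def resolve_payload(message: dict) -> bytes:
    """Consumer side: fetch the actual payload using the reference."""
    return blob_store[message["payload_ref"]]

big = b"x" * (5 * 1024 * 1024)        # 5 MB payload
msg = build_kafka_message(big)        # the Kafka message itself stays tiny
assert resolve_payload(msg) == big
```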
### 2. Split messages
Break large messages into smaller chunks:

- Works within default Kafka limits
- Allows for parallel processing
- Provides better error recovery
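A minimal chunking sketch, assuming each chunk is sent as its own Kafka message and reassembled on the consumer side. The 1 MB chunk size is illustrative; in practice it should sit slightly below the broker limit to leave room for headers and envelope overhead:

```python
def split_message(payload: bytes, chunk_size: int) -> list[dict]:
    """Split a payload into ordered chunks that fit within Kafka's limits."""
    total = (len(payload) + chunk_size - 1) // chunk_size
    return [
        {"seq": i, "total": total,
         "data": payload[i * chunk_size:(i + 1) * chunk_size]}
        for i in range(total)
    ]

def reassemble(chunks: list[dict]) -> bytes:
    """Consumer side: reorder by sequence number and concatenate."""
    ordered = sorted(chunks, key=lambda c: c["seq"])
    assert len(ordered) == ordered[0]["total"], "missing chunks"
    return b"".join(c["data"] for c in ordered)

payload = b"a" * 2_500_000                     # 2.5 MB payload
chunks = split_message(payload, 1_000_000)     # each chunk fits the 1 MB default
assert len(chunks) == 3
assert reassemble(chunks) == payload
```

All chunks of one payload should be keyed identically so they land on the same partition and arrive in order.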
### 3. Compression
Enable compression to reduce message sizes:

- Reduces network bandwidth usage
- Decreases storage requirements
- Often improves throughput
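Compression is configured on the producer (it can also be set at the broker or topic level). A minimal sketch; the codec choice is illustrative:

```properties
# producer.properties -- zstd and lz4 generally offer good ratio/CPU trade-offs
compression.type=zstd
```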
## Best practices
### Configuration checklist
When configuring for large messages, ensure all these settings are aligned:

- ✅ Producer: max.request.size
- ✅ Broker: message.max.bytes
- ✅ Topic: max.message.bytes
- ✅ Consumer: max.partition.fetch.bytes
- ✅ Consumer: fetch.max.bytes
- ✅ Broker: replica.fetch.max.bytes
### Monitor large messages
Monitor these metrics when working with large messages:

- Memory usage on brokers, producers, and consumers
- Network bandwidth utilization
- Disk I/O patterns and latency
- Garbage collection frequency and duration
- Message throughput and latency
## Large message decision guide

Large messages can significantly impact Kafka performance. Always test in a staging environment that mirrors your production setup before deploying large message configurations.

### See it in practice with Conduktor

Conduktor Console lets you produce and consume messages while monitoring their sizes. Test your large message configurations and verify all component limits are aligned.
## Next steps
- Enable compression to reduce message sizes
- Configure log segments for storage optimization
- Understand batching for throughput optimization