## Default message size limits

By default, Kafka has the following message size limits:

- Producer: 1 MB (`max.request.size`)
- Broker: 1 MB (`message.max.bytes`)
- Topic: inherits the broker setting (`max.message.bytes`)
- Consumer: 1 MB (`max.partition.fetch.bytes`)
## Configuring Kafka for large messages

To send messages larger than 1 MB, you need to configure multiple components consistently:

### Producer configuration
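As a sketch, raising the producer limit to 10 MB might look like the following (the 10485760 value is illustrative, not a recommendation):

```properties
# producer.properties -- illustrative 10 MB limit
# Maximum size of a request the producer will send; must cover the largest record.
max.request.size=10485760
```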
### Broker configuration
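A matching broker-side sketch (in `server.properties`, again using an illustrative 10 MB value):

```properties
# server.properties -- illustrative 10 MB limits
# Largest record batch the broker will accept.
message.max.bytes=10485760
# Should be at least message.max.bytes so followers can replicate large messages.
replica.fetch.max.bytes=10485760
```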
### Topic configuration
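The topic-level limit can override the broker default for a single topic. A sketch using the `kafka-configs.sh` tool (the topic name and bootstrap address are placeholders):

```shell
# Illustrative: raise the limit for one topic to 10 MB
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name my-large-topic \
  --add-config max.message.bytes=10485760
```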
### Consumer configuration
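On the consumer side, the fetch limits should be at least as large as the biggest message you expect. An illustrative sketch:

```properties
# consumer.properties -- illustrative 10 MB limits
# Maximum data returned per partition per fetch; must fit the largest message.
max.partition.fetch.bytes=10485760
# Upper bound on the total size of a fetch response.
fetch.max.bytes=52428800
```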
## Performance implications

Sending large messages through Kafka has several performance implications.

### Memory usage
- Larger messages consume more memory on brokers, producers, and consumers
- Can lead to increased garbage collection pressure
- May require tuning JVM heap sizes
### Network bandwidth
- Large messages consume more network bandwidth
- Can lead to network congestion and timeouts
- May require adjusting network buffer sizes
### Disk I/O
- Larger messages result in more disk I/O operations
- Can impact log compaction performance
- May require faster storage systems
### Throughput impact
- Large messages generally reduce overall throughput
- Kafka is optimized for high-throughput, small messages
- Consider message batching strategies
## Alternative approaches

Instead of sending large messages directly, consider these alternatives.

### 1. External storage pattern

Store large payloads in external systems and send only references:

- Keeps Kafka messages small and fast
- Allows for separate scaling of storage and messaging
- Enables efficient caching strategies
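A minimal sketch of this pattern (sometimes called the claim-check pattern). The in-memory `object_store` dict stands in for a real external store such as S3 or HDFS; the function names are hypothetical, not a Kafka API:

```python
import json
import uuid

# Hypothetical in-memory object store standing in for S3, HDFS, a database, etc.
object_store = {}

def store_payload(payload: bytes) -> str:
    """Store a large payload externally and return a reference key."""
    key = str(uuid.uuid4())
    object_store[key] = payload
    return key

def build_reference_message(payload: bytes) -> bytes:
    """Producer side: build a small message that carries only a reference."""
    key = store_payload(payload)
    record = {"payload_ref": key, "size_bytes": len(payload)}
    return json.dumps(record).encode("utf-8")

def resolve_reference_message(message: bytes) -> bytes:
    """Consumer side: look the payload back up from the external store."""
    record = json.loads(message)
    return object_store[record["payload_ref"]]

large = b"x" * (5 * 1024 * 1024)           # 5 MB payload
msg = build_reference_message(large)        # small reference message
assert len(msg) < 200                       # well under any Kafka limit
assert resolve_reference_message(msg) == large
```

The message that actually travels through Kafka is a few dozen bytes regardless of payload size, which is why this pattern keeps the broker fast.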
### 2. Message splitting

Break large messages into smaller chunks:

- Works within default Kafka limits
- Allows for parallel processing
- Provides better error recovery
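A sketch of chunking and reassembly, assuming chunks of one logical message share an `id` and carry `seq`/`total` metadata (these field names are illustrative, not a standard):

```python
import math

def split_message(payload: bytes, chunk_size: int, message_id: str) -> list[dict]:
    """Split a large payload into chunk records small enough for Kafka."""
    total = math.ceil(len(payload) / chunk_size)
    return [
        {
            "id": message_id,   # groups chunks of the same logical message
            "seq": i,           # position of this chunk
            "total": total,     # how many chunks the consumer should expect
            "data": payload[i * chunk_size:(i + 1) * chunk_size],
        }
        for i in range(total)
    ]

def reassemble(chunks: list[dict]) -> bytes:
    """Consumer side: rebuild the payload once all chunks have arrived."""
    ordered = sorted(chunks, key=lambda c: c["seq"])
    if len(ordered) != ordered[0]["total"]:
        raise ValueError("missing chunks")
    return b"".join(c["data"] for c in ordered)

payload = b"y" * (3 * 1024 * 1024 + 17)                 # just over 3 MB
chunks = split_message(payload, 1024 * 1024, "msg-1")   # 1 MB chunks
assert len(chunks) == 4
assert reassemble(chunks) == payload
```

Keying all chunks of one message to the same partition (e.g. by `id`) preserves their ordering, which simplifies reassembly on the consumer side.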
### 3. Compression

Enable compression to reduce message sizes:

- Reduces network bandwidth usage
- Decreases storage requirements
- Often improves throughput
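Compression is enabled on the producer. A one-line sketch (`lz4` and `zstd` are common choices; which performs best depends on your payloads):

```properties
# producer.properties -- compress batches before sending
compression.type=lz4
```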
## Best practices

### Recommendations for large messages
- Avoid large messages when possible: Kafka is optimized for small, high-throughput messages
- Use external storage: store large payloads externally and reference them in Kafka messages
- Enable compression: always enable compression for large messages
- Monitor memory usage: ensure adequate heap sizing for all components
- Test thoroughly: verify the performance impact in your specific environment
## Configuration checklist

When configuring for large messages, ensure all of these settings are aligned:

- ✅ Producer `max.request.size`
- ✅ Broker `message.max.bytes`
- ✅ Topic `max.message.bytes`
- ✅ Consumer `max.partition.fetch.bytes`
- ✅ Consumer `fetch.max.bytes`
- ✅ Broker `replica.fetch.max.bytes`
## Monitoring considerations

Monitor these metrics when working with large messages:

- Memory usage on brokers, producers, and consumers
- Network bandwidth utilization
- Disk I/O patterns and latency
- Garbage collection frequency and duration
- Message throughput and latency
> Performance impact: Large messages can significantly impact Kafka performance. Always test in a staging environment that mirrors your production setup before deploying large-message configurations.