Learn how min.insync.replicas ensures data durability in 12 minutes

The min.insync.replicas setting works with producer acknowledgments to control when writes are considered successful. Understanding this configuration is essential for balancing data durability with availability in production Kafka deployments.

What you’ll learn:
  • How min.insync.replicas works with producer acks
  • The relationship between replication factor, ISR, and availability
  • How to configure min.insync.replicas at topic and broker level
  • Common configuration patterns for production

How min.insync.replicas works

The min.insync.replicas setting specifies the minimum number of replicas that must acknowledge a write for it to be considered successful when the producer uses acks=all.
The default value of acks has changed with Kafka v3.0:
  • if using Kafka < v3.0, acks=1
  • if using Kafka >= v3.0, acks=all

Producer acks review

acks=0

When acks=0, producers consider messages “written successfully” the moment the message is sent, without waiting for the broker to accept it at all. If the broker goes offline or an exception occurs, the producer won’t know, and the data is lost. This 'fire-and-forget' approach is useful only for data where it’s acceptable to lose some messages, such as metrics collection, and it yields the highest throughput because network overhead is minimized.

acks=1

When acks=1, producers consider messages “written successfully” once the message is acknowledged by the leader only. Replication is not guaranteed, since it happens in the background. If an ack is not received, the producer may retry the request. If the leader broker goes offline unexpectedly before the replicas have replicated the data, the data is lost.

acks=all

When acks=all, producers consider messages “written successfully” when the message is accepted by all in-sync replicas (ISR). The leader replica for a partition checks whether there are enough in-sync replicas to safely write the message, as controlled by the broker setting min.insync.replicas. The request is stored in a buffer until the leader observes that the follower replicas have replicated the message, at which point a successful acknowledgment is sent back to the client.

The min.insync.replicas setting can be configured at both the topic and the broker level. Data is considered committed when it is written to at least min.insync.replicas in-sync replicas. A value of 2 means that at least two brokers in the ISR (including the leader) must respond that they have the data. If you want to be sure that committed data is written to more than one replica, set the minimum number of in-sync replicas to a higher value.

If a topic has three replicas and you set min.insync.replicas to 2, then you can only write to a partition in the topic if at least two out of the three replicas are in-sync. When all three replicas are in-sync, everything proceeds normally, and the same is true if one of the replicas becomes unavailable. However, if two out of three replicas are unavailable, the brokers will no longer accept produce requests. Instead, producers that attempt to send data will receive NotEnoughReplicasException.
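Conceptually, the leader-side check described above can be sketched in a few lines of Python. This is an illustration of the rule, not Kafka's actual code; the function and parameter names are ours:

```python
def broker_accepts_write(acks, isr_count, min_insync_replicas=1):
    """Sketch of the leader-side check for an incoming produce request.

    acks: the producer setting -- 0, 1, or "all"
    isr_count: current number of in-sync replicas, leader included
    """
    if acks == "all":
        # With acks=all, the leader rejects the write when the ISR has
        # shrunk below min.insync.replicas (NotEnoughReplicasException).
        return isr_count >= min_insync_replicas
    # With acks=0 or acks=1, only the leader itself needs to be up.
    return isr_count >= 1

# RF=3, min.insync.replicas=2: one broker down is fine, two is not
print(broker_accepts_write("all", isr_count=2, min_insync_replicas=2))  # True
print(broker_accepts_write("all", isr_count=1, min_insync_replicas=2))  # False
```

Note that the check only applies to acks=all producers: with acks=1, the write is still accepted even when the ISR has shrunk below the minimum.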

Durability vs availability trade-offs

For a topic with a replication factor of 3, the data can survive the loss of 2 brokers. As a general rule, with a replication factor of N, you can permanently lose up to N-1 brokers and still recover your data.

Availability matrix

| Configuration                          | Broker failures tolerated | Use case                               |
|----------------------------------------|---------------------------|----------------------------------------|
| RF=3, acks=all, min.insync.replicas=1  | 2                         | Default, low durability                |
| RF=3, acks=all, min.insync.replicas=2  | 1                         | Recommended for production             |
| RF=3, acks=all, min.insync.replicas=3  | 0                         | Maximum durability, no fault tolerance |

Availability rules

  • Reads: As long as one replica of a partition is up and in the ISR, that partition is available for reads
  • Writes with acks=0 or acks=1: As long as one replica is up and in the ISR, writes succeed
  • Writes with acks=all: Must have at least min.insync.replicas ISRs available
Formula: With acks=all, replication.factor=N, and min.insync.replicas=M, you can tolerate N-M brokers going down for topic availability purposes.
Popular configuration: acks=all and min.insync.replicas=2 is the most popular option for balancing data durability and availability, and it allows you to withstand the loss of at most one Kafka broker.
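The availability formula and the matrix above can be checked with a few lines of arithmetic. A minimal sketch (the function name is ours):

```python
def tolerated_failures(replication_factor, min_insync_replicas):
    # With acks=all, a partition stays writable as long as at least
    # min.insync.replicas replicas remain in sync, so up to
    # N - M brokers can fail before produce requests start failing.
    return replication_factor - min_insync_replicas

for m in (1, 2, 3):
    print(f"RF=3, min.insync.replicas={m} -> "
          f"tolerates {tolerated_failures(3, m)} broker failure(s)")
```

Running this reproduces the availability matrix for RF=3: min.insync.replicas of 1, 2, and 3 tolerate 2, 1, and 0 broker failures respectively.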

Configure min.insync.replicas at topic level

CLI extensions: use the CLI command with the appropriate extension for your platform, e.g., kafka-configs.bat for Windows, kafka-configs.sh for Linux.
Before running Kafka CLI commands, make sure that Kafka is up and running. First, create a topic named configured-topic with 3 partitions and a replication factor of 1:
kafka-topics --bootstrap-server localhost:9092 --create --topic configured-topic --partitions 3 --replication-factor 1
Describe the topic to check whether any configuration overrides are set:
kafka-topics --bootstrap-server localhost:9092 --describe --topic configured-topic
Topic: configured-topic	TopicId: CDU7SBxBQ1mzJGnuH68-cQ	PartitionCount: 3	ReplicationFactor: 1	Configs:
	Topic: configured-topic	Partition: 0	Leader: 2	Replicas: 2	Isr: 2
	Topic: configured-topic	Partition: 1	Leader: 3	Replicas: 3	Isr: 3
	Topic: configured-topic	Partition: 2	Leader: 1	Replicas: 1	Isr: 1
Set the min.insync.replicas value for the topic configured-topic to 2:
kafka-configs --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name configured-topic --add-config min.insync.replicas=2
And describe the topic again:
kafka-topics --bootstrap-server localhost:9092 --describe --topic configured-topic
Topic: configured-topic	TopicId: CDU7SBxBQ1mzJGnuH68-cQ	PartitionCount: 3	ReplicationFactor: 1	Configs: min.insync.replicas=2
	Topic: configured-topic	Partition: 0	Leader: 2	Replicas: 2	Isr: 2
	Topic: configured-topic	Partition: 1	Leader: 3	Replicas: 3	Isr: 3
	Topic: configured-topic	Partition: 2	Leader: 1	Replicas: 1	Isr: 1
Now you can see there is a topic configuration override set (on the right side of the output): min.insync.replicas=2. Note that because this topic has a replication factor of 1, an override of 2 can never be satisfied: producers using acks=all would receive NotEnoughReplicasException, so in practice keep min.insync.replicas at or below the replication factor. You can delete the configuration override by passing --delete-config in place of the --add-config flag:
kafka-configs --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name configured-topic --delete-config min.insync.replicas

Configure min.insync.replicas at broker level

Through a configuration file change

The default value of this configuration is 1. However, this can be changed at the broker level. Open the broker configuration file config/server.properties and append the following at the end of the file:
min.insync.replicas=2
Unlike kafka-configs, which can change configuration while the broker is running, editing server.properties requires a broker restart for the change to take effect.

Dynamic broker configuration change using kafka-configs CLI

The kafka-configs CLI can also update broker configuration dynamically without requiring a broker restart:
kafka-configs --bootstrap-server localhost:9092 --alter --entity-type brokers --entity-default --add-config min.insync.replicas=2
Output:
Completed updating default config for brokers in the cluster.
Describe the dynamically updated configurations:
kafka-configs --bootstrap-server localhost:9092 --describe --entity-type brokers --entity-default
Default configs for brokers in the cluster are:
  min.insync.replicas=2 sensitive=false synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:min.insync.replicas=2}
Delete the dynamic configuration:
kafka-configs --bootstrap-server localhost:9092 --alter --entity-type brokers --entity-default  --delete-config min.insync.replicas
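The interaction between the two levels follows Kafka's standard precedence rule: a topic-level override wins over the broker-level default. A small sketch of that resolution (the function and variable names are ours):

```python
def effective_min_insync(topic_overrides, broker_default=1, topic=None):
    # A topic-level override (set via kafka-configs --entity-type topics)
    # takes precedence over the broker-level default.
    return topic_overrides.get(topic, broker_default)

overrides = {"configured-topic": 2}  # topics with an explicit override
print(effective_min_insync(overrides, broker_default=1, topic="configured-topic"))  # 2
print(effective_min_insync(overrides, broker_default=1, topic="other-topic"))       # 1
```

Deleting a topic's override, as shown above, makes that topic fall back to the broker-level value (the dynamic default if one is set, otherwise the value in server.properties, otherwise 1).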
See it in practice with Conduktor: Conduktor Console displays topic configurations including min.insync.replicas and replication status. Monitor ISR counts across partitions to ensure your durability settings are effective. The Insights dashboard automatically identifies topics at risk of data loss based on their replication factor and min.insync.replicas configuration, helping you prioritize remediation.

Next steps