Kafka Fundamentals: kafka acks

Kafka Acks: A Deep Dive into Delivery Semantics and Operational Excellence

1. Introduction

Imagine a financial trading platform where order acknowledgements are critical. A lost order, even momentarily, can lead to significant financial repercussions. Or consider a CDC (Change Data Capture) pipeline replicating data across multiple data centers for disaster recovery. Data loss or inconsistency is unacceptable. In both scenarios, the reliability guarantees provided by Kafka’s acknowledgement (ack) mechanism are paramount.

Kafka, as the backbone of many real-time data platforms, powers microservices, stream processing applications (Kafka Streams, Flink, Spark Streaming), and distributed transactions. Its ability to deliver messages with varying levels of durability and consistency is fundamental to building robust, scalable, and fault-tolerant systems. This post delves into the intricacies of Kafka acks, moving beyond superficial explanations to provide a production-focused understanding for engineers operating at scale. We’ll cover architecture, configuration, failure modes, performance implications, and operational best practices.

2. What is "kafka acks" in Kafka Systems?

“Kafka acks” define the level of acknowledgement a producer receives from the Kafka brokers after publishing a message. It’s a core producer configuration parameter (there is no consumer-side equivalent) controlling the durability of message delivery: acks dictates how many brokers must acknowledge receipt of a write before the producer considers the send successful.

The possible values are:

  • 0: The producer doesn’t wait for any acknowledgement. This offers the highest throughput but the lowest reliability. Messages can be lost if the leader broker goes down before replication.
  • 1: The producer waits for acknowledgement from the leader broker. This provides a good balance between throughput and reliability. However, data loss is still possible if the leader fails before replication to followers.
  • all (or -1): The producer waits for acknowledgement from all in-sync replicas (ISRs). This offers the highest durability: the message must be replicated to every replica currently in the ISR before the send is considered successful.

The acks setting is a producer property, but its behavior is intrinsically linked to broker configuration, specifically the min.insync.replicas setting. min.insync.replicas defines the minimum number of in-sync replicas that must acknowledge a write for it to succeed under acks=all. If the number of available ISRs falls below this threshold, the broker rejects acks=all writes (with NotEnoughReplicasException), trading write availability for durability; producers using acks=0 or acks=1 are unaffected. The setting can be defined cluster-wide in server.properties or overridden per topic, and topic-level overrides can be changed dynamically without a broker restart.
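
To make this concrete, here is a minimal Java producer sketch. It is illustrative only: the broker address, topic name, and record contents are assumptions, not taken from any particular deployment.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-1:9092"); // assumed address
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // wait for all in-sync replicas
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // de-duplicate broker-side retries
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "order-42", "BUY 100 ACME"),
                (metadata, exception) -> {
                    if (exception != null) {
                        // Fires if the ISR drops below min.insync.replicas
                        // (NotEnoughReplicasException) or retries are exhausted.
                        exception.printStackTrace();
                    }
                });
        } // close() flushes any outstanding sends
    }
}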

3. Real-World Use Cases

  • Financial Transactions: acks=all is essential. Losing a transaction record is unacceptable. Idempotent producers (enabled via enable.idempotence=true) are also crucial to prevent duplicate messages during retries.
  • CDC Replication (Multi-Datacenter): Replicating database changes across geographically distributed data centers requires strong durability. acks=all combined with a carefully configured replication factor (typically 3) ensures data consistency.
  • Log Aggregation: While some log loss might be tolerable, a high level of reliability is still desired. acks=1 provides a reasonable trade-off between throughput and durability.
  • Event Sourcing: Event sourcing relies on an immutable log of events. acks=all is critical to guarantee the integrity of the event stream.
  • Stream Processing Backpressure: When consumers are slower than producers, the added produce latency of acks=all can compound backpressure. Monitor consumer lag (a quick CLI check follows this list) and adjust producer batch sizes or increase broker resources as needed.
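
A quick way to inspect lag from the CLI (the group name here is a placeholder):

kafka-consumer-groups.sh --bootstrap-server kafka-broker-1:9092 --describe --group my-consumer-group

The LAG column reports, per partition, how many messages the group still has to process.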

4. Architecture & Internal Mechanics

graph LR
    A[Producer] --> B{"Kafka Broker (Leader)"};
    B --> C["Log Segment (Leader)"];
    D{"Kafka Broker (Follower 1)"} -- fetch --> B;
    E{"Kafka Broker (Follower 2)"} -- fetch --> B;
    D --> F["Log Segment (Follower 1)"];
    E --> G["Log Segment (Follower 2)"];
    subgraph Kafka Cluster
        B
        D
        E
    end
    B -- "ack (acks=1: after leader write)" --> A;
    B -- "ack (acks=all: after ISR catch-up)" --> A;
    style C fill:#f9f,stroke:#333,stroke-width:2px

The diagram illustrates the basic data flow. The producer sends a message to the leader broker, which appends it to its active log segment. The followers fetch new records from the leader and append them to their own log segments; once the ISR has caught up, the leader can satisfy acks=all.

  • Log Segments: Kafka stores messages in immutable log segments. Each segment has a maximum size and retention period.
  • Controller Quorum: The Kafka controller manages partition leadership and replication. It monitors broker health and initiates leader elections.
  • Replication: Messages are replicated across multiple brokers to provide fault tolerance.
  • ISR (In-Sync Replicas): The ISR is the set of replicas currently caught up to the leader. acks=all requires acknowledgement from every replica in the ISR, with min.insync.replicas as the floor (see the AdminClient sketch after this list).
  • KRaft (Kafka Raft): Replacing ZooKeeper, KRaft provides a more scalable and efficient metadata management solution. The controller’s role is now handled by a Raft consensus algorithm.
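
Since acks=all semantics hinge on the current ISR, it is useful to inspect it programmatically. A minimal sketch using the Java AdminClient; the broker address and topic name are assumptions:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;
import java.util.Collections;
import java.util.Properties;

public class IsrCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-1:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("my-topic"))
                    .all().get().get("my-topic");
            for (TopicPartitionInfo p : desc.partitions()) {
                // A partition whose ISR size is below min.insync.replicas
                // will reject acks=all writes.
                System.out.printf("partition=%d leader=%s isr=%s%n",
                        p.partition(), p.leader(), p.isr());
            }
        }
    }
}

The same information is available from the CLI via kafka-topics.sh --describe.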

5. Configuration & Deployment Details

server.properties (Broker Configuration):

min.insync.replicas=2
replica.fetch.max.bytes=1048576 # 1MB

default.replication.factor=3

producer.properties (Producer Configuration):

bootstrap.servers=kafka-broker-1:9092,kafka-broker-2:9092,kafka-broker-3:9092
acks=all
enable.idempotence=true
linger.ms=5
batch.size=16384
compression.type=snappy

consumer.properties (Consumer Configuration):

While acks isn't directly a consumer property, fetch.min.bytes and fetch.max.wait.ms influence how much data the consumer fetches from the broker, impacting overall throughput and latency.
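
A hedged starting point (these values are illustrative defaults to tune from, not recommendations):

fetch.min.bytes=1 # return data as soon as any is available
fetch.max.wait.ms=500 # broker-side wait cap when fetch.min.bytes isn't met
enable.auto.commit=false # commit offsets manually, after processing
max.poll.records=500 # bound the per-poll batch size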

CLI Examples:

  • Set min.insync.replicas on a topic (acks is a producer setting and cannot be set per topic; min.insync.replicas is the topic-level knob that pairs with acks=all):

    kafka-configs.sh --bootstrap-server kafka-broker-1:9092 --entity-type topics --entity-name my-topic --alter --add-config min.insync.replicas=2

  • Verify the setting:

    kafka-configs.sh --bootstrap-server kafka-broker-1:9092 --entity-type topics --entity-name my-topic --describe


6. Failure Modes & Recovery

  • Broker Failure: If the leader fails with acks=1, messages acknowledged by the leader but not yet replicated to followers are lost when a new leader is elected. With acks=all, every acknowledged message already exists on all ISR members, so the new leader (elected from the ISR by the controller) retains it.
  • ISR Shrinkage: If the number of ISRs falls below min.insync.replicas, the broker stops accepting acks=all writes. This prevents data loss but sacrifices write availability.
  • Rebalances: Consumer rebalances can cause duplicate processing (offsets committed after processing) or skipped messages (offsets committed before processing); the data in the log itself is not lost.
  • Message Loss: Idempotent producers and transactional guarantees (using Kafka Transactions) are essential for preventing duplicate or lost messages in failure scenarios.
  • Recovery Strategies:
    • Idempotent Producers: Prevent duplicates from producer retries by attaching a producer ID and sequence number to each batch; this gives exactly-once delivery per partition within a single producer session, not end-to-end exactly-once.
    • Kafka Transactions: Provide atomic writes across multiple partitions and topics (see the sketch after this list).
    • Offset Tracking: Consumers must reliably commit their offsets to avoid reprocessing or skipping messages.
    • Dead Letter Queues (DLQs): Route failed messages to a DLQ for investigation and potential reprocessing.
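
A minimal sketch of the transactions pattern. The broker address, transactional.id, topics, and records are assumptions for illustration:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class TransactionalWrite {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-1:9092");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-writer-1"); // stable per producer instance
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions(); // fences zombie instances sharing the same transactional.id
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("payments", "acct-1", "debit:100"));
            producer.send(new ProducerRecord<>("ledger", "acct-1", "entry:100"));
            producer.commitTransaction(); // both writes become visible atomically
        } catch (ProducerFencedException e) {
            producer.close(); // another instance took over; this one must not continue
        } catch (KafkaException e) {
            producer.abortTransaction(); // roll back; retry at the application level
        }
    }
}

Note that setting transactional.id implicitly enables idempotence, which in turn forces acks=all.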

7. Performance Tuning

| Configuration | Impact | Recommendation |
| --- | --- | --- |
| linger.ms | Latency vs. throughput | Increase for higher throughput; decrease for lower latency. |
| batch.size | Throughput | Increase to maximize throughput, but watch producer memory usage. |
| compression.type | Throughput & storage | snappy for a good balance; gzip compresses harder at more CPU cost. |
| fetch.min.bytes | Latency & throughput | Increase to reduce network overhead; decrease for lower latency. |
| replica.fetch.max.bytes | Replication speed | Increase if follower replication is lagging. |

Benchmark: on well-provisioned hardware, a Kafka cluster running acks=all with tuned configurations can typically sustain aggregate throughput from several hundred MB/s up to around 1 GB/s, with produce latencies in the 1–10ms range. Treat such numbers as rough guides and measure on your own cluster (see the perf-test command below).
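
The bundled perf-test tool is a reasonable starting point for your own measurements (topic name and record counts here are arbitrary):

kafka-producer-perf-test.sh --topic my-topic --num-records 1000000 --record-size 1024 --throughput -1 --producer-props bootstrap.servers=kafka-broker-1:9092 acks=all

Running it once with acks=1 and once with acks=all quantifies the durability/latency trade-off on your hardware.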

8. Observability & Monitoring

  • Prometheus & JMX: Monitor Kafka JMX metrics using Prometheus.
  • Grafana Dashboards: Visualize key metrics like:
    • Consumer Lag: Indicates how far behind consumers are from the latest messages.
    • Replication In-Sync Count: Shows the number of ISRs for each partition.
    • Request/Response Time: Measures the latency of producer and consumer requests.
    • Request Queue Size: The depth of the broker’s request-handler queue, a useful saturation signal.
  • Alerting:
    • Alert if consumer lag exceeds a threshold.
    • Alert if any partition’s in-sync replica count falls below min.insync.replicas (see the example rule after this list).
    • Alert on high request/response times.
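
As a sketch, a Prometheus alerting rule for the under-min-ISR condition might look like the following. The metric name assumes a common JMX-exporter mapping of the kafka.cluster:type=Partition,name=UnderMinIsr MBean and will vary with your exporter configuration:

groups:
  - name: kafka
    rules:
      - alert: KafkaUnderMinIsr
        expr: sum(kafka_cluster_partition_underminisr) > 0  # assumed metric name
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Partitions below min.insync.replicas; acks=all writes are failing"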

9. Security and Access Control

  • SASL/SSL: Encrypt communication between producers, consumers, and brokers.
  • SCRAM: Use SCRAM (e.g., SCRAM-SHA-512) for password-based authentication (client config sketch below).
  • ACLs: Control access to topics and consumer groups using ACLs.
  • Kerberos: Integrate Kafka with Kerberos for centralized authentication.
  • Audit Logging: Enable audit logging to track access and modifications to Kafka resources.
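
A hedged client-side example combining SASL/SCRAM with TLS; the principal, password, and truststore path are placeholders:

security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="svc-producer" password="changeme";
ssl.truststore.location=/etc/kafka/secrets/truststore.jks
ssl.truststore.password=changeme

Granting that principal write access to a topic via ACLs:

kafka-acls.sh --bootstrap-server kafka-broker-1:9092 --add --allow-principal User:svc-producer --operation Write --topic my-topic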

10. Testing & CI/CD Integration

  • Testcontainers: Use Testcontainers to spin up ephemeral Kafka clusters for integration testing (see the sketch after this list).
  • Embedded Kafka: Run Kafka within your test suite for faster and more isolated testing.
  • Consumer Mock Frameworks: Mock consumer behavior to test producer functionality.
  • CI Pipeline:
    • Schema compatibility checks.
    • Contract testing to ensure data contracts are maintained.
    • Throughput tests to verify performance.
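
A minimal JUnit 5 + Testcontainers sketch; the image tag and topic name are assumptions:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import java.util.Properties;

@Testcontainers
class AcksIntegrationTest {

    @Container
    static final KafkaContainer KAFKA =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    @Test
    void acksAllSendSucceeds() throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA.getBootstrapServers());
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // get() blocks until the broker acknowledges the write
            producer.send(new ProducerRecord<>("test-topic", "k", "v")).get();
        }
    }
}

With a single-broker container the ISR is just the leader, so acks=all is trivially satisfied; the value of the test is exercising the real client/broker protocol, not multi-replica durability.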

11. Common Pitfalls & Misconceptions

  • Misunderstanding min.insync.replicas: Setting it too low compromises durability.
  • Ignoring ISR Shrinkage: Failing to address ISR shrinkage can lead to write failures.
  • Overlooking Producer Retries: Insufficient retry configuration can lead to message loss on transient failures; see the retry-related settings below.
  • Incorrect Offset Management: Improper offset tracking can cause reprocessing or skipping messages.
  • Assuming acks=all guarantees exactly-once: It guarantees durability (at-least-once); retries can still create duplicates unless idempotence or transactions are enabled.
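
Sensible retry-related settings for a durability-focused producer (values are illustrative starting points):

retries=2147483647 # effectively infinite; bounded by delivery.timeout.ms
delivery.timeout.ms=120000 # total time budget for a send, including retries
max.in.flight.requests.per.connection=5 # ordering preserved when enable.idempotence=true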

Logging Sample (Broker):

[2023-10-27 10:00:00,000] WARN [Controller id=1] Controller 1 at kafka-broker-1:9092 failed to allocate leader for partition my-topic-0 because no in-sync replicas are available.  (min.insync.replicas=2)

12. Enterprise Patterns & Best Practices

  • Shared vs. Dedicated Topics: Use dedicated topics for critical data streams to isolate failure domains.
  • Multi-Tenant Cluster Design: Implement resource quotas and access control to manage multi-tenancy.
  • Retention vs. Compaction: Choose time/size-based retention for event streams and log compaction (keep the latest value per key) for changelog-style topics; both are per-topic configs (example below).
  • Schema Evolution: Use a Schema Registry (e.g., Confluent Schema Registry) to manage schema changes.
  • Streaming Microservice Boundaries: Design microservices around bounded contexts and use Kafka to facilitate asynchronous communication.
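
For example, switching an existing topic to compaction, or adjusting its retention, is a per-topic change (topic name assumed):

kafka-configs.sh --bootstrap-server kafka-broker-1:9092 --entity-type topics --entity-name my-topic --alter --add-config cleanup.policy=compact

kafka-configs.sh --bootstrap-server kafka-broker-1:9092 --entity-type topics --entity-name my-topic --alter --add-config retention.ms=604800000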

13. Conclusion

Kafka acks are a cornerstone of building reliable, scalable, and fault-tolerant real-time data platforms. Understanding the nuances of this configuration parameter, its interplay with other Kafka settings, and its impact on performance is crucial for any engineer operating Kafka at scale. Investing in observability, building internal tooling for monitoring and alerting, and proactively addressing potential failure modes will ensure the long-term health and stability of your Kafka-based systems. Next steps should include implementing robust monitoring dashboards, automating failure injection testing, and continuously refining your Kafka configuration based on observed performance and operational data.
