
If we store transaction details in Kafka and consume them later, even with partitioning, is that enough? How many resources will this consume?

🟡 Medium · Conceptual · Mid level
Times asked: 1
Last seen: May 2026
First seen: May 2026

💡 Model Answer

Kafka is designed for high‑throughput, low‑latency messaging, and the resources it consumes depend on a handful of factors: the number of partitions, the replication factor, message size, retention period, and consumer group configuration.

Each partition is an append‑only log stored as segment files on disk. More partitions increase parallelism, but each one adds file handles, disk I/O, and broker memory overhead for producers and consumers. A replication factor of 3 means every message is written to three brokers, tripling network traffic and disk usage for writes. Consumers read sequentially from partitions, and a consumer group with one consumer per partition achieves maximum read parallelism (a sketch of this pattern appears below). CPU usage is modest for plain string messages but rises with compression and serialization overhead. Disk usage grows linearly with ingest rate and retention period.

As a back‑of‑the‑envelope example: at 1 MB per message (unusually large for a transaction record, but useful for illustration) and 10 k messages per second, producers generate ~10 GB/s of data. With 3 replicas, that is ~30 GB/s of write traffic across the brokers, ~10 GB/s of read traffic for a single consumer group, and roughly 2.6 PB of disk per day before retention deletes old segments; the arithmetic is worked through below.

Monitoring tools (Kafka Manager, Prometheus) help tune these parameters. In summary, Kafka can sustain very high message rates on reasonable hardware if partition counts, replication, and retention are sized to the workload.
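
To make the arithmetic concrete, here is a minimal sketch of the capacity estimate. The message size, rate, and replication factor are the assumed example values from above, not measurements from a real cluster, and decimal units (1 GB = 1000 MB) are used throughout.

```java
// Back-of-the-envelope Kafka capacity estimate using the assumed
// example figures above (1 MB messages, 10k msg/s, 3 replicas).
public class KafkaCapacityEstimate {
    public static void main(String[] args) {
        double messageSizeMb = 1.0;     // assumed average message size
        double messagesPerSec = 10_000; // assumed produce rate
        int replicationFactor = 3;

        // Producers generate ~10 GB/s of logical data (decimal units).
        double ingressGbPerSec = messageSizeMb * messagesPerSec / 1_000;
        // Every byte lands on 3 brokers: ~30 GB/s of physical writes.
        double writeGbPerSec = ingressGbPerSec * replicationFactor;
        // One consumer group reads each byte once: ~10 GB/s.
        double readGbPerSec = ingressGbPerSec;
        // Disk fills at the physical write rate until retention kicks in:
        // 30 GB/s * 86,400 s/day is roughly 2.6 PB/day across the cluster.
        double diskPbPerDay = writeGbPerSec * 86_400 / 1_000_000;

        System.out.printf("write %.0f GB/s, read %.0f GB/s, disk %.2f PB/day%n",
                writeGbPerSec, readGbPerSec, diskPbPerDay);
    }
}
```

And here is a minimal sketch of the one-consumer-per-partition pattern using the Java kafka-clients API. The broker address, the group id txn-processors, and the topic name transactions are placeholders for illustration, not values from the question.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TransactionConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "txn-processors");          // all instances share this group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");       // start from the oldest retained record

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("transactions")); // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Each partition is assigned to exactly one member of the group,
                    // so running up to <partition-count> instances parallelizes reads.
                    process(record.key(), record.value());
                }
            }
        }
    }

    private static void process(String key, String value) {
        // Placeholder for real transaction handling (dedup, persistence, etc.).
        System.out.printf("key=%s value=%s%n", key, value);
    }
}
```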
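
Note that starting more instances of this process than there are partitions leaves the extras idle, which is why the partition count is the hard ceiling on consumer parallelism.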

This answer was generated by AI for study purposes. Use it as a starting point — personalize it with your own experience.
