Are we going to have one Postgres database with one table for transactions? We will keep writing to the transactions table at about 5 to 7 million transactions per second. Will Postgres handle this?
💡 Model Answer
PostgreSQL is not designed for 5–7 million writes per second to a single table. Its write path is bounded by WAL durability (every commit must be flushed to the write-ahead log), disk I/O, and lock contention. To approach that throughput you would need to combine several strategies:

1. Shard the transaction table across multiple nodes — logical sharding with an extension such as Citus, or application-level routing with a shard key that maps rows to different servers. Within each node, declarative partitioning keeps individual tables and indexes small (see the first sketch after this list).
2. Relax per-commit durability with `synchronous_commit = off` and batch inserts to amortize commit overhead (second sketch below). Asynchronous commit can lose the most recent commits on a crash, which may be unacceptable for financial transactions.
3. Use a hybrid, write-optimized architecture: stream writes to a message queue such as Kafka, then batch-load into Postgres (e.g., via `COPY`) for analytics.
4. Scale the hardware: NVMe SSDs, ample RAM for `shared_buffers`, and a high-bandwidth network.
5. Consider a distributed, PostgreSQL-compatible OLTP database (e.g., CockroachDB, which speaks the Postgres wire protocol, or YugabyteDB, which reuses the Postgres query layer) that shards and replicates automatically.

Even with aggressive tuning, a single Postgres instance typically tops out in the low hundreds of thousands of simple inserts per second; reaching millions requires a distributed architecture or a NoSQL store designed for write-heavy workloads.
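As a minimal sketch of the per-node partitioning mentioned in point 1, the following Python script (using psycopg2) creates a hash-partitioned transactions table. The table layout, column names, partition count, and connection string are all illustrative assumptions, not a prescribed schema:

```python
# Hypothetical sketch: create a hash-partitioned transactions table.
# Schema, partition count, and DSN are illustrative assumptions.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS transactions (
    id          bigserial,
    account_id  bigint NOT NULL,
    amount      numeric(12, 2) NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
) PARTITION BY HASH (account_id);
"""

def create_partitions(conn, n_partitions=16):
    with conn.cursor() as cur:
        cur.execute(DDL)
        # One child table per hash bucket; rows are routed by account_id.
        for i in range(n_partitions):
            cur.execute(
                f"CREATE TABLE IF NOT EXISTS transactions_p{i} "
                f"PARTITION OF transactions "
                f"FOR VALUES WITH (MODULUS {n_partitions}, REMAINDER {i});"
            )
    conn.commit()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=txdb")  # assumed DSN
    create_partitions(conn)
    conn.close()
```

Hash partitioning on the shard key keeps each partition's indexes small and lets hot inserts spread across many underlying tables; the same key can later route rows across physical nodes.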
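And a minimal sketch of point 2, batched inserts with asynchronous commit, assuming psycopg2 and the table layout above. `batch_size` and the sample data are arbitrary assumptions:

```python
# Hypothetical sketch: batched inserts with asynchronous commit.
# Trades a small durability window for much higher commit throughput.
import psycopg2
from psycopg2.extras import execute_values

def load_batch(conn, rows, batch_size=10_000):
    with conn.cursor() as cur:
        # Session-level setting: a crash can lose the last few
        # not-yet-flushed commits, but never corrupts the database.
        cur.execute("SET synchronous_commit TO OFF;")
        for start in range(0, len(rows), batch_size):
            # execute_values expands one multi-row INSERT per batch.
            execute_values(
                cur,
                "INSERT INTO transactions (account_id, amount) VALUES %s",
                rows[start:start + batch_size],
            )
            conn.commit()  # one commit per batch, not per row

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=txdb")  # assumed DSN
    sample = [(1, 10.50), (2, 99.99)] * 5_000  # dummy data
    load_batch(conn, sample)
    conn.close()
```

Batching moves the bottleneck from per-commit WAL flushes to bulk I/O; for even higher rates, `COPY` outperforms multi-row `INSERT`.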