For large datasets I prefer keyset (cursor) pagination over offset because offsets become slow and inconsistent as row counts grow. I’d expose a page size (capped at 100) and a stable cursor that encodes the sort key (for example created_at + id). The SQL looks like: SELECT ... FROM items WHERE (created_at, id) > (:created_at, :id) ORDER BY created_at, id LIMIT :limit. I’d add a composite index on (created_at, id) to keep p95 DB latency under 150ms. First pages get cached in Redis with a short TTL (30s) and we return ETag/Last-Modified for conditional requests. I’d return 400 for invalid params, 422 if page size exceeds cap, and 404/410 for expired cursors. Rate limiting via token-bucket at the gateway (e.g., 500 requests/min per API key) with 429 + Retry-After. In a recent rollout my team of 4 cut DB CPU by ~35% and improved throughput 2.5x in two weeks by switching to this approach.
Get AI-powered feedback on your answer and improve your skills
Takes 5-10 minutes