Amazon S3 performance tips

Omar Faroque
Software System Design
6 min read · Sep 27, 2019


We’ve worked with a large number of customers over the last few years, helping them get some truly massive workloads into and out of Amazon S3. What follows is a little bit of best-practice guidance for getting big on S3, including some background on why the tricks work.

First: for smaller workloads (<50 total requests per second), none of the below applies, no matter how many total objects one has! S3 has a bunch of automated agents that work behind the scenes, smoothing out load all over the system so that the myriad diverse workloads all share the resources of S3 fairly and snappily. Even workloads that occasionally burst over 100 requests per second really don’t need to give us any hints about what’s coming… we are designed to just grow and support these workloads forever. S3 is a true scale-out design in action.

S3 scales to both short-term and long-term workloads far, far greater than this. We have customers continuously performing thousands of requests per second against S3, all day, every day. Some of these customers simply ‘guessed’ on their own how our storage and retrieval system works, or came to S3 from another system that partitions namespaces using similar logic. We worked with other customers through our Premium Developer Support offerings to help them design a system that would scale basically indefinitely on S3. Today we’re going to publish that…
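To make the partitioning idea concrete before digging in, here is a minimal sketch of the kind of key naming such customers converged on: prepending a short hash of each key so that sequentially named objects spread evenly across the keyspace instead of clustering under one hot prefix. The helper name and prefix length are illustrative assumptions, not anything S3 itself exposes; S3 only ever sees the final key string.

```python
import hashlib

def partitioned_key(key: str, prefix_len: int = 4) -> str:
    # Illustrative helper (not an S3 API): prepend a short hex hash
    # so lexically sequential names land in different key ranges.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{key}"

# Sequential names like these would otherwise sort into one hot range:
for name in ("logs/2019-09-27-0001.gz", "logs/2019-09-27-0002.gz"):
    print(partitioned_key(name))
```

Running this prints keys like 3f1a/logs/2019-09-27-0001.gz, where the leading hex characters vary per object even though the original names are nearly identical.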
