The TL;DR is that it very much depends on the table and the usage pattern. In most cases, if the hot key issue leads to read throttling, you can (to some extent) address it with a caching layer such as Redis or Memcached, running on EC2 or ElastiCache (we picked Redis). You can change the logic of your code as follows:
key = cache.get(name, "xyz")      # e.g. a Redis GET
if key is None:                   # cache miss
    key = ...                     # query DynamoDB
    cache.set(name, "xyz", key)   # populate the cache for subsequent reads
The complicated part is cache invalidation. You can either expire cached entries after a certain period of time, or try to be smarter about it and invalidate a key in your cache as the record gets updated in DynamoDB. If you want a fast way to implement such a solution, you can rely on DynamoDB Streams and Lambda. You will find a number of blog posts on that topic, such as this one, which seems fairly complete (I didn't try it but it seems legit).

It is worth mentioning that Redis can do more than strictly caching data. For instance, on your Medium profile, you can see the number of followers and following. The canonical source of truth for this data is DynamoDB, but we also cache it and use the INCR command of Redis: rather than simply invalidating the cache every time someone follows a user, we increment the count in the cache in parallel to what we store in DynamoDB.
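To give a feel for the Streams-plus-Lambda approach, here is a minimal sketch of what the handler could look like. Everything specific here is an assumption on my part: the table's partition key name (`user_id`), the cache key format (`profile:<id>`), and the use of a plain dict in place of a real Redis client to keep the example self-contained.

```python
# Sketch of a Lambda handler wired to a DynamoDB Stream.
# Hypothetical assumptions: the table's partition key is "user_id" and
# cached profiles live under "profile:<user_id>". A plain dict stands in
# for the Redis client so the sketch is runnable on its own.

cache = {}  # stand-in for Redis; real code would use redis.Redis(...)

def handler(event, context):
    """Evict the cached entry for every record touched by the stream."""
    for record in event.get("Records", []):
        if record.get("eventName") in ("MODIFY", "REMOVE"):
            # Stream records carry DynamoDB's typed attribute format.
            user_id = record["dynamodb"]["Keys"]["user_id"]["S"]
            cache.pop(f"profile:{user_id}", None)  # drop the stale entry

# Example: a MODIFY event for user "42" evicts that user's cached profile.
cache["profile:42"] = {"followers": 10}
event = {
    "Records": [
        {"eventName": "MODIFY",
         "dynamodb": {"Keys": {"user_id": {"S": "42"}}}}
    ]
}
handler(event, None)
```

The same handler shape works whether you evict the key (as here) or update the cached value in place, the way we do with INCR for follower counts.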
For the hot key issues affecting writes, there isn't a silver bullet. Whenever we can, we actually try not to write to DynamoDB in real time and instead use SQS, Kinesis, and an offline processing service. This allows us to control the rate at which we write to DynamoDB.
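The queue-then-drain idea can be sketched as follows. All the names here are hypothetical: in production the queue would be SQS or Kinesis and `write_item` would call DynamoDB; a deque and a list keep the example self-contained, and the per-tick budget is an arbitrary placeholder you would tune against your provisioned write capacity.

```python
from collections import deque

MAX_WRITES_PER_TICK = 25  # assumed cap; tune to the table's write capacity

written = []  # stand-in for the DynamoDB table

def write_item(item):
    written.append(item)  # real code: table.put_item(Item=item)

def drain(queue, budget=MAX_WRITES_PER_TICK):
    """Write at most `budget` queued items; the rest wait for the next tick."""
    for _ in range(min(budget, len(queue))):
        write_item(queue.popleft())

# 60 pending writes drained at a controlled pace:
queue = deque({"id": i} for i in range(60))
drain(queue)   # first tick: 25 writes
drain(queue)   # second tick: 25 more
# 10 items remain queued for a later tick
```

The point of the indirection is that a burst of writes to a hot key lands in the queue at whatever rate it arrives, while DynamoDB only ever sees the steady drain rate.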
The easier-said-than-done advice I can share is that keeping the table below 10 GiB, or making sure to pick a good partition key, helps a lot. If you Google a bit, you will find a number of articles on data modeling with DynamoDB. This video seems to cover everything you need to know, including the cache-invalidation-with-Lambda technique mentioned above.
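On the "good partition key" point, one widely used pattern (not something we described above, just an illustration) is write sharding: appending a calculated or random suffix so a hot logical key fans out over several physical partitions. The shard count and key format below are assumptions for the sketch.

```python
import random

NUM_SHARDS = 10  # assumed; pick based on how hot the key is

def sharded_key(base_key):
    """Append a random shard suffix so writes spread over NUM_SHARDS partitions."""
    return f"{base_key}#{random.randrange(NUM_SHARDS)}"

def all_shards(base_key):
    """Readers must query every shard and merge the results."""
    return [f"{base_key}#{n}" for n in range(NUM_SHARDS)]

# e.g. sharded_key("popular-user") -> "popular-user#7" (suffix varies per call)
```

The trade-off is the usual one: writes get cheap and evenly spread, but reads now have to fan out across all shards and aggregate.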