Hi Jono, good article, but it would also be fair to point out that these limitations are well explained in the AWS documentation: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BestPractices.html
Also, comparing BigTable to DynamoDB can be a bit of an “apples to oranges” comparison. It’s true that BigTable may offer more throughput for less money, but it really depends on the use case.
If you have 100GB of (properly sharded) data and provision 1,000 r/s and 100 w/s (which would still give you a sustained throughput of 100 reads and 10 writes per second per partition, plus bursting), your cost would be about $250/month. For moderate loads, that’s cheaper than the corresponding $1,600/month for BigTable. Even at higher throughput and a larger data size, say 500GB with 10,000 r/s and 1,000 w/s on 1-year reserved capacity, it’s $9,384/year vs. more than $19,000 for BigTable with the sustained use discount.
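To make the per-partition numbers explicit, here’s a quick back-of-the-envelope sketch in Python. It assumes the partition-sizing rule from the older DynamoDB docs (roughly 10GB of data and 3,000 RCU / 1,000 WCU per partition), so treat those constants as assumptions rather than guarantees:

    import math

    # Back-of-the-envelope partition math for the 100GB example above.
    # ASSUMPTION: the partition-sizing rule from the older DynamoDB docs,
    # roughly 10GB of data and 3,000 RCU / 1,000 WCU per partition.

    def estimate_partitions(size_gb, rcu, wcu):
        by_size = math.ceil(size_gb / 10)                   # partitions needed for storage
        by_throughput = math.ceil(rcu / 3000 + wcu / 1000)  # partitions needed for throughput
        return max(by_size, by_throughput)

    size_gb, rcu, wcu = 100, 1000, 100
    partitions = estimate_partitions(size_gb, rcu, wcu)  # -> 10
    print(partitions, "partitions")
    print(rcu // partitions, "r/s and", wcu // partitions, "w/s sustained per partition")
    # -> 100 r/s and 10 w/s per partition, matching the figures above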
For larger datasets or higher throughput, BigTable becomes more and more cost-effective, especially at petabyte scale. But then there may be other, more efficient ways to store the data anyway: for instance, a smaller, higher-throughput table for frequently accessed data alongside a larger one for infrequently accessed data, or using S3 to store larger documents and having DynamoDB act only as the index for them (see the sketch below).
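As a sketch of that last pattern, something like the following keeps the DynamoDB items tiny and pushes the bulk of the bytes to S3 (the bucket name, table name, and attribute names here are hypothetical, and the table is assumed to have “doc_id” as its hash key):

    import boto3

    # Minimal sketch of the "S3 for the documents, DynamoDB as the index" idea.
    # ASSUMPTION: "my-documents-bucket", "documents-index", and the attribute
    # names are hypothetical; substitute your own resources.

    s3 = boto3.client("s3")
    index = boto3.resource("dynamodb").Table("documents-index")

    def store_document(doc_id, body, metadata):
        # Put the (potentially large) document body in S3...
        key = "documents/" + doc_id
        s3.put_object(Bucket="my-documents-bucket", Key=key, Body=body)
        # ...and keep only a small, cheap index entry in DynamoDB.
        item = {"doc_id": doc_id, "s3_key": key}
        item.update(metadata)
        index.put_item(Item=item)

    def fetch_document(doc_id):
        # Look up the index entry, then fetch the body from S3.
        item = index.get_item(Key={"doc_id": doc_id})["Item"]
        obj = s3.get_object(Bucket="my-documents-bucket", Key=item["s3_key"])
        return obj["Body"].read(), item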
With all that said, I’m glad you solved your problem so conveniently by just migrating to GCP.
