Nope. As I mentioned above, we still use it, so I'm fully aware of what it is today. It's still conceptually broken.
No, it's not indexes; it's datastore read costs. It means we read entities too often and didn't memcache enough. The problem is that it's nearly impossible to see where memcache didn't cover what it should have.
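To make that concrete: the usual mitigation is a read-through cache, where every datastore read goes through one choke point that records cache misses, so the billable reads become visible per key. Here is a minimal generic sketch of that pattern; the dicts standing in for the datastore and memcache, and the names `cached_get` and `MISSES`, are illustrative inventions, not the actual App Engine API.

```python
# Generic read-through cache sketch (NOT the real App Engine memcache API).
# The point: funnel all reads through one function and count the misses,
# so you can see which keys are generating billable datastore reads.

DATASTORE = {"user:1": {"name": "alice"}}  # stand-in for the datastore
CACHE = {}                                 # stand-in for memcache
MISSES = {}                                # per-key count of billable reads

def cached_get(key):
    if key in CACHE:
        return CACHE[key]                  # cache hit: free
    MISSES[key] = MISSES.get(key, 0) + 1   # cache miss: this read is billed
    value = DATASTORE.get(key)
    CACHE[key] = value
    return value

cached_get("user:1")   # miss: goes to the datastore
cached_get("user:1")   # hit: served from cache
print(MISSES)          # only one billable read for this key
```

In practice the hard part is exactly what the comment above says: on App Engine the reads that bypassed a choke point like this were invisible until the bill arrived.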
And again, no. IaaS won't charge you to infinity. If I have a bug, I can actually see the failure in my logs. No, I won't get that theoretical Google scale, but that's pure theory and, in my experience, detached from reality. E.g. in our case the so-called scale came at a huge price. Our new servers are masked behind a 20 USD Cloudflare account, which gives us some of that scale without the cost. Yes, dynamic data can always fail, but again, we'll have a log.
We changed the architecture by moving away. The problem isn't architectural, it's conceptual. Our debug environment is VERY different from production, since the production DB is App Engine's datastore. We didn't migrate to Cloud SQL when it launched, which might have solved some things (by throwing us back into SQL?), so effectively the only place issues could be reproduced was production deployments. That's conceptually broken.
The problem is that App Engine is opaque and its pieces can't be tested in isolation. Normally that wouldn't be a big deal, but when reading from a database can cost you four figures, it's a real problem…