It all comes down to how fast you’d like your reads to be. If I were Twitter, I’d write the data millions of times and not allow edits! I’d have some system in place to fan the data out over a period of time, but just think about the alternative: millions of joins at read time.
Disk space is cheap compared to joins, so if people are actually reading the data with any frequency, duplicating it everywhere it will need to be read is the quickest way to get massively scalable reads. You can then shard your database across multiple Firebase instances if necessary, and everyone’s reads still work the same.
This sort of problem is not easy. We’re talking about non-trivial architectures here, and they’re just going to take a lot longer to sort out. You may even need to write some scaling tests :)