Regarding VR: essentially it has all the characteristics of a video game, so I'll talk about how games work, particularly online games.
There is a player or players, human, interacting with objects or aspects of a virtual world, and with representations of other players, via some human-computer interface: controllers, joysticks, motion detectors, for example the Xbox Kinect or the Oculus Rift headset or similar. The virtual world is generated by a computer or cluster of computers, perhaps remotely, perhaps locally, but frequently in some shared/distributed manner.

For example, many massively multiplayer online games use powerful servers to coordinate player actions with other player actions: arbitrating games, deciding who shot whom first, setting up random shared game conditions like the weather, that sort of thing. That is low-bandwidth, CPU-based math processing, fairly low power, but still significant. Those kinds of calculations can be reduced to models of computation that have no graphics component and don't require large amounts of data transfer, and therefore less energy expenditure. A player's client device, e.g. a game console or PC, just sends the current player coordinates and actions to the server, and receives the other players' coordinates and actions in return. That interaction can be transmitted between the player and the computer generating the world, as you said, via current wireless standards, and you are correct, the data transmission is relatively low power.

Current games can get away with that trick because they do the power-hungry, heavy graphics processing locally on the player's own computer or Xbox/PlayStation/Nintendo console. The 3D graphics are very power intensive: lighting, shadows, detail, it all has to be generated in real time without noticeable lag. Compare that to, for example, 3D movies by Pixar, which have the luxury of being pre-rendered from a single viewing perspective at much higher quality than would be possible in real time. Those animation studios use banks of hundreds of very high-end computers that have astronomical power bills.
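To make the "low bandwidth" point concrete, here is a rough Python sketch of what a per-tick client update might look like. The field layout, tick rate, and names are hypothetical, not taken from any real game protocol:

```python
import struct

# Hypothetical per-tick client update. The layout below is illustrative:
# position (3 floats) + view angles (2 floats) + action bitmask (1 uint32).
UPDATE_FORMAT = "<3f2fI"  # little-endian

def pack_update(x, y, z, yaw, pitch, actions):
    """Serialize one client update into a compact binary payload."""
    return struct.pack(UPDATE_FORMAT, x, y, z, yaw, pitch, actions)

payload = pack_update(10.0, 2.5, -7.0, 90.0, 0.0, 0b101)
tick_rate = 60  # updates per second, a common server tick rate

bytes_per_second = len(payload) * tick_rate
print(len(payload))        # 24 bytes per update
print(bytes_per_second)    # 1440 bytes/s upstream, before protocol overhead
```

Even at 60 updates per second that is on the order of a kilobyte or two per second upstream, which is why the state-sync traffic itself is cheap compared to rendering.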
In a modern PC designed for high-resolution graphics, the graphics processing unit (GPU) is usually the most expensive and the most power-hungry component. Interestingly, these same GPUs are ideal for AI/machine learning, because rendering a scene in real-time 3D is an "embarrassingly parallel" processing task, much like machine learning techniques such as deep learning. So VR could use that same distributed processing model, and currently does with the HTC, Sony and other available tech; however, the minimum graphics hardware requirements to support VR are much higher than for standard 3D video games, hence very high power usage. If the processing and graphics generation and rendering all happened on the server, it would just shift the geography of where the processing and power usage occurs. It would also require a much higher data transfer rate, and more power usage for that, but probably equivalent to 4K video data rates, so certainly doable on 5G as stated.
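A quick back-of-envelope check on the "4K video data rates" comparison. The bitrate figures below are rough public ballpark numbers I'm assuming for illustration, not measurements:

```python
# Rough sanity check: server-side rendering streamed as compressed 4K video.

width, height = 3840, 2160   # 4K resolution
fps = 60
bits_per_pixel = 24          # uncompressed RGB

raw_bps = width * height * fps * bits_per_pixel
raw_gbps = raw_bps / 1e9
print(round(raw_gbps, 1))    # ~11.9 Gbit/s uncompressed -- far too much to stream

compressed_mbps = 25   # assumed typical HEVC-compressed 4K streaming bitrate
five_g_mbps = 100      # assumed conservative real-world 5G throughput
print(compressed_mbps < five_g_mbps)  # True: feasible once video-compressed
```

The gap between the raw and compressed figures is the whole story: the uncompressed frames never leave the server, only the compressed video stream does, which is why 5G-class links are plausibly enough.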
So in short, it's not the wireless that uses the power, it's generating and rendering the graphics that make up the virtual world. As we shift to low-power mobile devices and tablets for much of our computing, we shift the processing, and therefore the power usage, into the data centers that run the backends of the apps and web applications we use, and overall power usage goes up.
I'll respond to the other comments later. I like the thought that has gone into the urban planning, economics and potential solutions you're proposing, so I'd certainly like to read more. I'll send my email.