Interesting approach. We are experimenting with something similar, but on top of a generative approach, where constructor and phenotype networks both contribute to the structure and weights of the final network. The final network performs inference, while the error signal is propagated through reinforcement.
The ultimate goal for any AGI system is learning to learn as a basic skill. So for autonomous behavior, while we are unable to build a network large enough to simulate the brain, we might be able to train the generator to produce a suitable network for the observed input (that is, regenerate it each time the input changes significantly).
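For what it's worth, here is a minimal hypernetwork-style sketch of that idea. All names and dimensions are my own illustration, not your actual setup: a generator (a fixed random linear map here, standing in for a trained constructor) maps an observation summary to the full parameter vector of a small phenotype network, which then performs inference; one would regenerate the phenotype whenever the input regime shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only
OBS_DIM = 4        # observation summary fed to the generator
HIDDEN = 8         # hidden width of the generated (phenotype) network
OUT_DIM = 2        # outputs of the generated network

# Parameter count the generator must emit for the phenotype net:
# W1 (OBS_DIM x HIDDEN) + b1 (HIDDEN) + W2 (HIDDEN x OUT_DIM) + b2 (OUT_DIM)
N_PARAMS = OBS_DIM * HIDDEN + HIDDEN + HIDDEN * OUT_DIM + OUT_DIM

# Stand-in generator: in the real scheme this would be a trained network
# that emits phenotype weights conditioned on the observation.
G = rng.normal(scale=0.1, size=(OBS_DIM, N_PARAMS))

def generate_phenotype(obs):
    """Map an observation summary to the weights of a small MLP."""
    theta = obs @ G
    i = 0
    W1 = theta[i:i + OBS_DIM * HIDDEN].reshape(OBS_DIM, HIDDEN)
    i += OBS_DIM * HIDDEN
    b1 = theta[i:i + HIDDEN]
    i += HIDDEN
    W2 = theta[i:i + HIDDEN * OUT_DIM].reshape(HIDDEN, OUT_DIM)
    i += HIDDEN * OUT_DIM
    b2 = theta[i:i + OUT_DIM]
    return W1, b1, W2, b2

def phenotype_forward(params, x):
    """Inference with the generated network."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

obs = rng.normal(size=OBS_DIM)       # summary of the current input regime
params = generate_phenotype(obs)     # regenerate when the input shifts
y = phenotype_forward(params, obs)
print(y.shape)                       # (2,)
```

In the full version the generator would be trained (e.g. by the reinforcement signal from the phenotype's performance) rather than fixed, and could emit structure as well as weights.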