This is cool. I have also been thinking about what deep nets can learn about fractals. This seems applicable since many natural phenomena have statistics that are well modeled by fractals (Mandelbrot 1982).
I read a paper recently where the architecture of the NN was itself a fractal:
Abstract: "We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated…" (arxiv.org)
Playing with this is on my todo list, because I have no intuition about this.
I am thinking that one limitation of the implementation you showed is that the final image resolution limits what you can see in a generated fractal output. What would be cool is to generate the Mandelbrot set at various zoom depths and see if that makes learning easier: each sample would look completely different, yet all are outputs of the same function. If there were a few LSTM layers in the architecture, they might help relate the different scales of the Mandelbrot set images, especially if the network were given continuous zoom values…
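As a rough sketch of what I mean by "various depths": the usual escape-time rendering, but with the viewing window shrunk by a zoom factor around a fixed point near the set's boundary. Everything here (the 64×64 size, the zoom schedule, the center point) is just an illustrative choice, not anything from the paper above.

```python
import numpy as np

def mandelbrot_image(center, zoom, size=64, max_iter=100):
    """Escape-time rendering of a window of the Mandelbrot set.

    center: complex point to zoom toward; zoom: scale factor (larger = deeper).
    Returns a size x size array of escape counts normalized to [0, 1].
    """
    # half-width of the viewing window shrinks as zoom grows
    half = 2.0 / zoom
    xs = np.linspace(center.real - half, center.real + half, size)
    ys = np.linspace(center.imag - half, center.imag + half, size)
    c = xs[None, :] + 1j * ys[:, None]

    z = np.zeros_like(c)
    counts = np.full(c.shape, max_iter, dtype=np.float64)
    alive = np.ones(c.shape, dtype=bool)  # points that have not escaped yet
    for n in range(max_iter):
        z[alive] = z[alive] ** 2 + c[alive]
        escaped = alive & (np.abs(z) > 2.0)
        counts[escaped] = n  # record the iteration at which each point escaped
        alive &= ~escaped
    return counts / max_iter

# A toy training set: same function, increasing depth around one boundary point.
samples = [mandelbrot_image(-0.75 + 0.1j, 2.0 ** d) for d in range(8)]
```

The point is that `samples` are all generated by one fixed rule, so a model that relates them across the continuous `zoom` (or depth `d`) axis would in some sense be recovering the self-similarity of the set rather than memorizing any single image.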