AlphaGo Zero does incorporate memory of previous moves. The network's input stacks the 8 most recent board positions (plus a plane indicating whose turn it is). This history is needed because the rule of ko forbids recreating a previous board position, so the current board alone is not enough to determine which moves are legal.
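To make the stacking concrete, here is a minimal sketch of how such an input tensor could be assembled. The function name, the board encoding (1 = black, -1 = white, 0 = empty), and the padding scheme are my own assumptions for illustration; only the plane layout (8 positions × 2 colours + 1 turn plane = 17 planes) follows the paper.

```python
import numpy as np

BOARD = 19
HISTORY = 8  # the 8 most recent positions, as in the AlphaGo Zero paper

def encode_input(history, to_play):
    """Stack recent board states into a 17-plane input (a sketch, not DeepMind's code).

    `history`: list of (BOARD, BOARD) int arrays, most recent last,
    with 1 = black stone, -1 = white stone, 0 = empty.
    `to_play`: 1 if black moves next, -1 if white.
    """
    planes = np.zeros((2 * HISTORY + 1, BOARD, BOARD), dtype=np.float32)
    recent = list(history[-HISTORY:])
    # Pad with empty boards early in the game, when fewer than 8 positions exist.
    recent = [np.zeros((BOARD, BOARD), dtype=int)] * (HISTORY - len(recent)) + recent
    for t, board in enumerate(recent):
        planes[2 * t] = (board == to_play)       # current player's stones at time t
        planes[2 * t + 1] = (board == -to_play)  # opponent's stones at time t
    planes[-1] = 1.0 if to_play == 1 else 0.0    # colour-to-play indicator plane
    return planes
```

Because both players' stones at every time step are in the input, the network can detect that a candidate move would recreate an earlier position.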
A second, more subtle form of memory is that the MCTS search tree is retained from one move to the next: after a move is played, the subtree rooted at the resulting position is kept and the rest of the tree is discarded. The visit counts accumulated in that tree feed directly into the formula used to select the next move to explore. Note that MCTS is not just a training device; the paper's evaluation games also run the search at play time. The raw network by itself is still a formidable opponent, though considerably weaker than the network combined with search.
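The role of the visit counts can be sketched with the PUCT selection rule and the subtree-reuse step. This is a simplified illustration, not the paper's implementation; the class and function names are hypothetical, and details such as virtual loss and Dirichlet noise are omitted.

```python
import math

class Node:
    """One edge/state in a toy MCTS tree (illustrative only)."""
    def __init__(self, prior):
        self.prior = prior      # P(s, a): move probability from the policy head
        self.visits = 0         # N(s, a): how often this edge was explored
        self.value_sum = 0.0    # W(s, a): accumulated value estimates
        self.children = {}      # move -> Node

    def q(self):
        # Q(s, a): mean value of this edge; 0 if never visited.
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child maximising Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    total = sum(child.visits for child in node.children.values())
    return max(
        node.children.items(),
        key=lambda item: item[1].q()
        + c_puct * item[1].prior * math.sqrt(total) / (1 + item[1].visits),
    )

def advance_root(root, move):
    """Keep the subtree for the move actually played; drop everything else."""
    return root.children.get(move) or Node(prior=0.0)
```

Because `advance_root` preserves the chosen subtree, statistics gathered while thinking about earlier moves carry over to the next search, which is the second kind of memory described above.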