First of all, this is pretty old news.
Second, the chance that we are living in a simulation is not as high as Musk claims. Imagine a civilization that has the resources to build a simulation of a civilization; then that simulation builds a simulation of its own, and so on… The resource requirements for the outermost simulation grow exponentially with nesting depth, so only a limited number of recursively simpler simulations can exist with enough complexity to allow self-awareness in the simulated inhabitants.
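To make the resource argument concrete, here is a toy calculation. All numbers are hypothetical illustrations, not real estimates: if each nested simulation only gets a fixed fraction of its host's compute, the feasible nesting depth is logarithmic in the ratio between the top-level resources and the minimum compute needed to sustain self-aware inhabitants.

```python
def max_nesting_depth(host_ops, shrink_factor, min_ops):
    """Count how many simulations can be nested before the compute
    available to a level drops below the minimum needed for
    self-aware inhabitants. Parameters are hypothetical."""
    depth = 0
    ops = host_ops
    # Each level only gets 1/shrink_factor of its parent's compute.
    while ops // shrink_factor >= min_ops:
        ops //= shrink_factor
        depth += 1
    return depth

# Hypothetical numbers: a base reality with 10**40 ops/s, each
# simulation getting 1% of its parent's compute, and self-awareness
# requiring at least 10**20 ops/s.
print(max_nesting_depth(10**40, 100, 10**20))  # → 10
```

Even with generous assumptions, the depth grows only logarithmically, which is the point of the argument: the tower of simulations cannot be arbitrarily tall.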
Third, once an inner simulation discovers proof that it is indeed a simulation, there will be entities inside it that try to "break" it. Assuming one succeeds, the same issue will likely recur in the outer simulation that created the inner one, leading to an outward-cascading system crash. I therefore postulate that simulated self-awareness is unstable.