As you describe it, your concept of computational irreducibility is a cousin of chaos theory and fractals. In my view, they all contribute to the modern paradigm shift away from hard math and toward programs.
The programs that are replacing hard math in dealing with difficult phenomena can, and in many cases do, produce output that one can't distinguish from the real behavior of those phenomena. Climate models, for instance, produce videos of Earth's weather patterns over long time periods that are indistinguishable in character from satellite videos. What they can't do is replicate actual behavior: they can't go from actual conditions today to accurate predictions a century from now.
Here's the problem, as I frame it. The actual conditions of a complex system like the climate and the weather consist of an almost statistically infinite amount of information: the physical descriptions of all the components of the system, which for climate and weather means the materials of the atmosphere, oceans, and land masses. But programs, or computer models, or Rule 30 algorithms, or anything else we might employ contain, relatively speaking, almost no starting information. One can't go from scant information to a specific collection of statistically infinite information. Your Rule 30, for instance, can go from scant initial information to statistically infinite information, but not to information that would be specifically true of a specific real-world phenomenon at some future point. You can't generate specific, accurate, real information from nothing; you can merely generate typical information.
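To make the asymmetry concrete, here is a minimal sketch of Rule 30 in Python (my own illustration, not anyone's reference code). The starting state is a single black cell, essentially one bit of information, yet the rule unfolds it into an effectively random pattern. The pattern is complex, but nothing about it is specifically true of any real-world system.

```python
def rule30_step(row):
    """Apply Rule 30 to one row: new cell = left XOR (center OR right)."""
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

def run_rule30(width=31, steps=15):
    """Evolve a single black cell for a number of steps."""
    row = [0] * width
    row[width // 2] = 1          # the "scant initial information": one cell
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    # Print the evolution as a triangle of '#' and '.' characters.
    for row in run_rule30():
        print("".join("#" if c else "." for c in row))
```

Running this prints the familiar chaotic triangle: typical complexity, generated from almost nothing.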
Simply put, the new paradigm can mimic but not predict. Many years ago, for instance, a very simple rule was used to mimic the flight of a flock of thousands of blackbirds. The output was indistinguishable from how a real flock behaves on a fall day. But that program would be useless for predicting where a real flock of blackbirds would go next, let alone where it would be an hour from now.
At another point, regarding Artificial Intelligence, you write “…in the end the real challenge is to find a way to describe goals.” That is a challenge, but the real challenge is to establish boundaries that AI must stay within. For humans, those boundaries are our values, our ethics, our conscience, our laws and culture and history and innate humanity. For a machine, none of those boundaries exist except those we impose. Without them, AI will be psychopathic.