AI’s Rearview Mirror: How Generative Aesthetics Are Shaping — and Limiting — Our Imagined Futures
By Julian Scaff
As generative AI tools — particularly image generators like Midjourney, DALL·E, and Stable Diffusion — become increasingly mainstream, they are beginning to play a central role in shaping our visual and conceptual understanding of the future. AI-generated imagery is becoming the default language for imagining what’s next, from corporate foresight decks to design school portfolios to social media trend cycles. But what kind of future is this language constructing?
For example, Quantumrun, an excellent platform for exploring future trends, integrates AI-generated images into its futurecasting reports. These visuals, often characterized by a lack of logical coherence, nonetheless resonate with our mental models of the future. I often wonder what we should visualize or think about when looking at these homogenized, nonsensical images, and how they play upon our subconscious. I have also questioned my own use of generative AI, and whether it was stunting my imagination.
The flood of low-quality, AI-generated content that often lacks coherence, depth, or originality is frequently called “AI Slop.” These outputs — from garbled images to bland, repetitive text — are produced at scale with little human oversight or editorial curation. The widespread use of such content contributes to the “enshittification” of the internet: a term describing the gradual degradation of online spaces as platforms prioritize engagement metrics and monetization over user experience and content quality. As AI Slop proliferates, it drowns out thoughtful human-created work, clutters search results, and diminishes the overall value and trustworthiness of the web.
While these tools can generate content with astonishing detail and speed, they also have profound aesthetic and cognitive limitations. Rather than offering radically new visions, generative AI presents a narrow, flattened version of the future — one that is more nostalgic than innovative, more visually seductive than intellectually rigorous. This subtly but significantly reshapes society’s thinking about what’s possible.
Visualizing the Future Through a Rearview Mirror
Generative AI models are trained on massive datasets composed primarily of existing human-created content: photographs, digital art, illustrations, films, video games, and more. While these datasets are vast, they are also bounded by what has already been imagined and produced. As such, generative outputs tend to remix historical and cultural tropes rather than invent new visual languages.
This temporal backwardness results in a phenomenon that scholar Sherry Turkle calls “simulation without innovation” — where the new is indistinguishable from pastiche (Turkle, 1995). Cyberpunk cityscapes, humanoid robots, dystopian megastructures, and sterile techno-utopias are over-represented, echoing 20th-century science fiction far more than 21st-century sociotechnical reality. This effect, sometimes called “retrofuturism,” is not inherently problematic — but when it dominates generative tools, it subtly narrows the range of speculative possibilities we can access.
Marshall McLuhan famously wrote that we drive into the future looking through the rearview mirror. Generative AI may be the most literal embodiment of that metaphor yet.
Aesthetic Homogenization
A distinct “AI aesthetic” has emerged in the past two years, characterized by hyper-detailed rendering, cinematic lighting, surreal color palettes, and dreamlike juxtapositions. While captivating, this style often privileges surface over substance. In these images, buildings may soar in impossible configurations; clothing features intricate textures but lacks seams or closures; vehicles float without visible propulsion.
This aesthetic creates a seductive sense of plausibility. As cultural theorist Jean Baudrillard observed, when images become hyperreal — more real than real — they can supplant critical thought (Baudrillard, 1981). The viewer’s imagination may be captured but not challenged. These slick, speculative visuals do not invite skepticism or interrogation. Instead, they generate what one might call “speculative comfort zones” — spaces that look futuristic enough to signal progress but generic enough not to disturb dominant worldviews.
Erosion of Common Sense and Physical Logic
Beyond style, many generative outputs suffer from structural incoherence. AIs often hallucinate — creating physically implausible objects, defying gravity or anatomy, or inventing concepts that violate basic logic. Examples abound: skyscrapers supported by a single steel beam, ergonomic chairs with no seat, or futuristic apparel with no way to be worn. Image-generation models are notoriously bad at rendering human anatomy, with results ranging from the amusing to the grotesque. These errors are typically chalked up to technical limitations, but their cognitive effects may be more lasting.
Repeated exposure to such visual anomalies may subtly recalibrate our expectations of plausibility. When impossible structures or physically illogical designs become normalized through visual saturation, they risk dulling the viewer’s engineering imagination — that is, their internal sense of how things should work. In design disciplines, this shift from feasibility to spectacle could have long-term consequences for teaching and evaluating creativity.
Bias Loop: Past → Present → Future
The training datasets used for generative AI encode stylistic tendencies as well as cultural and ideological biases. These include Western-centric worldviews, gender and racial stereotypes, ableist assumptions, and a narrow range of economic and technological imaginaries. Because the models are trained on what is most available and popular (often scraped from online repositories), dominant narratives are amplified while alternative or subversive visions are filtered out.
This leads to a kind of “bias loop” in futures thinking: the past’s biases get embedded into the present’s generative tools, which project them into future visions. As a result, the future looks suspiciously like the neoliberal present — just shinier, more surveilled, and filled with gadgetry. Scholar Ruha Benjamin calls this dynamic the “New Jim Code”: the reproduction of existing social hierarchies through supposedly neutral technological systems (Benjamin, 2019).
Thus, instead of opening up the field of futures to diverse epistemologies and worldviews, generative AI often enacts a kind of flattened futurism — a homogeneous and decontextualized rendering of progress that is sanitized of conflict, history, and humanity.
Rethinking the Role of Designers and Futurists
Given these challenges, designers, artists, educators, and futurists must resist the generative flattening of imagination. Critical design practices like speculative design (Dunne & Raby, 2013) and post-normal design (Auger, 2013) offer valuable methods for questioning dominant narratives and constructing more pluralistic futures.
Additionally, insights from Science and Technology Studies (STS) — a field investigating how science and technology shape and are shaped by society — can help unpack the assumptions encoded in generative systems. Scholars like Sheila Jasanoff and Donna Haraway remind us that all technologies are culturally situated and politically charged, no matter how neutral they may seem.
Public literacy around AI systems is also essential. As visual culture increasingly becomes AI-mediated, people must be equipped with the tools to distinguish plausible futures from aesthetic hallucinations, and critical foresight from algorithmic kitsch.
Toward a Re:Generative Imagination
The emergence of generative AI represents both a profound opportunity and a subtle danger. On one hand, it can accelerate creative workflows, democratize access to visual storytelling, and inspire new directions in design. On the other hand, it risks narrowing the future to a set of visually pleasing but ideologically stagnant tropes.
To avoid this trap, we need a new kind of engagement — what we might call a re:generative imagination: one that uses AI as a collaborator, not a prophet; one that interrogates as much as it creates; one that treats the future not as a spectacle, but as a contested space of meaning, equity, and possibility.
References:
Auger, J. “Speculative Design: Crafting the Speculation.” Digital Creativity, 24(1), 11–35. 2013.
Baudrillard, J. Simulacra and Simulation. University of Michigan Press. 1994. (Original work published 1981.)
Benjamin, R. Race After Technology: Abolitionist Tools for the New Jim Code. Polity. 2019.
Dunne, A., & Raby, F. Speculative Everything: Design, Fiction, and Social Dreaming. MIT Press. 2013.
Haraway, D. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies, 14(3), 575–599. 1988.
Jasanoff, S. (Ed.). States of Knowledge: The Co-production of Science and Social Order. Routledge. 2004.
McLuhan, M. Understanding Media: The Extensions of Man. McGraw-Hill. 1964.
Turkle, S. Life on the Screen: Identity in the Age of the Internet. Simon & Schuster. 1995.