AI’s dark mirror: the unregulated world of shadow models
While the debate about AI focuses largely on players like ChatGPT, Claude and Gemini, a mostly unseen battle is being fought over shadow models.
Shadow models are replicas, mutations or clandestine developments that operate under the radar, without oversight and without accountability. And although they may end up being far more dangerous than the models we know, they represent an evolution that in many cases is not only inevitable but also potentially very interesting. My guess is that most of us who count as advanced users will end up, in one way or another, training and using our own models.
The term shadow AI is already used in the industry to refer to the unauthorized use of AI tools within organizations: models deployed without IT or corporate governance oversight. Obviously, this comes with risks: data leaks, unnoticed biases, regulatory non-compliance or simply operational chaos. But shadow models go a step further: they are not simply “unauthorized tools,” but complete models built with hidden data, often replicating the capabilities of leading models, without consent or traceability.
One of the most disturbing techniques is model distillation, a strategy that uses the outputs of publicly available models to train another, in many cases achieving a partially functional…
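To make the mechanism concrete, here is a minimal sketch of classic knowledge distillation in PyTorch. Everything in it is an illustrative assumption: the tiny teacher and student networks stand in for a large public model and its smaller imitator, the random tensors stand in for prompts, and in the scenario the article describes the teacher's outputs would be harvested through an API rather than a local forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins: a larger "teacher" whose outputs are harvested,
# and a smaller "student" trained to imitate its output distribution.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution to expose more signal

for step in range(200):
    # Unlabeled inputs: the distiller only needs prompts, not ground truth.
    x = torch.randn(32, 16)

    with torch.no_grad():
        teacher_logits = teacher(x)  # the harvested teacher outputs

    student_logits = student(x)

    # Standard distillation loss: KL divergence between the softened
    # teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```

The key point of the sketch is that the student never sees the teacher's weights or training data: the softened output probabilities alone carry enough signal to clone a useful fraction of the teacher's behavior, which is precisely what makes the technique so hard to detect or govern.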

