I just learned about Normalization of Deviance a few days ago (you’d think a space nerd like me would’ve known about it in the context of the Space Shuttle Challenger disaster), thanks to Johann Rehberger and his take on how it applies to AI.
The term Normalization of Deviance comes from the American sociologist Diane Vaughan, who describes it as the process by which deviance from correct or proper behavior or rules becomes culturally normalized.
This is something I think about a lot when I look at geopolitics, but I never realized how well it applies to AI and how we SaaS companies fall into the same trap more often than we’d like to admit.
In Johann’s words: “I use the term Normalization of Deviance in AI to describe the gradual and systemic over-reliance on LLM outputs, especially in agentic systems. (…) In the world of AI, we observe companies treating probabilistic, non-deterministic, and sometimes adversarial model outputs as if they were reliable, predictable, and safe.”
What worries me is that we often don’t even realize we’re doing it. Either we’re rushing to deliver value, or we’re simply learning as we go because this is still an emerging field.
Such a drift does not happen through a single reckless decision. It happens through a series of “temporary” shortcuts that quietly become the new baseline. Because the systems keep working, teams stop questioning the shortcuts, and the deviation, now invisible, becomes the new norm.
I feel like adding more guardrails or checks gets treated like tech debt and legacy code: nobody wants to do it, it doesn’t have obvious value, and it’s complicated and time-consuming. I’m worried that implementing agentic workflows will only make this worse and amplify the risks. But I’m optimistic that with a bit more discipline and fewer “we’ll fix it later” shortcuts, we can keep the innovation without normalizing the risk.
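To make “guardrails or checks” a bit more concrete, here is a minimal sketch of the kind of thing I mean: treating a model’s output as untrusted input and validating it against a schema and an allowlist before an agent is allowed to act on it. The tool names, the ALLOWED_TOOLS set, and validate_action are my own illustrative assumptions, not anything from the linked post.

```python
import json

# Hypothetical allowlist of tools the agent may call; anything else is rejected.
ALLOWED_TOOLS = {"search_docs", "summarize_ticket"}

class GuardrailError(Exception):
    """Raised when a model output fails validation and must not be acted on."""

def validate_action(raw_model_output: str) -> dict:
    """Treat the model output as untrusted: parse, type-check, and
    allowlist-check it before anything downstream gets to execute it."""
    try:
        action = json.loads(raw_model_output)
    except json.JSONDecodeError as exc:
        raise GuardrailError(f"Output is not valid JSON: {exc}") from exc

    if not isinstance(action, dict):
        raise GuardrailError("Expected a JSON object describing one action")

    tool = action.get("tool")
    args = action.get("args")
    if tool not in ALLOWED_TOOLS:
        raise GuardrailError(f"Tool {tool!r} is not on the allowlist")
    if not isinstance(args, dict):
        raise GuardrailError("'args' must be an object")

    return action  # only now is it safe to hand to the executor

# A well-formed output passes; an unexpected tool call is blocked.
validate_action('{"tool": "search_docs", "args": {"query": "refund policy"}}')
try:
    validate_action('{"tool": "delete_all_records", "args": {}}')
except GuardrailError as err:
    print("Blocked:", err)
```

It’s a few dozen lines, not a research project, which is exactly why it bothers me when checks like this get deferred as “we’ll fix it later.”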
Source: embracethered.com