Over the past year, I’ve been exploring ideas about systems thinking, human–AI collaboration, and organisational imprinting. As my research has evolved, one question keeps returning: why do organisations resist Generative AI even when the benefits are obvious?
We often assume resistance is about training, costs, skills, or ethics. But the more I read, the more I see that something deeper is happening — something psychological, emotional, and historical. Gen AI doesn’t just land in organisations as a neutral tool; it enters complex human systems shaped by past experiences, internal conflicts, and long-standing routines.
One useful way to understand this is through Freud’s structural model. Even though it comes from psychology, it offers a surprisingly powerful lens for workplace behaviour. At the deepest level, the id craves familiarity, routine, and the comfort of what is known. Gen AI disrupts this comfort, introducing speed, uncertainty, and new ways of working. At the same time, the superego — the internalised voice of organisational expectations — pushes employees to innovate, be productive, and keep up with fast-moving technology. Caught in the middle is the ego, trying to stay balanced while managing the anxiety that technological change inevitably brings.
When the ego feels overwhelmed, it activates defence mechanisms. These aren’t conscious choices; they are automatic responses that protect people from psychological discomfort. In organisations, these defences often appear as rational‑sounding arguments: “We need more evidence before we adopt this,” “Our users aren’t ready,” or “Let’s wait for the next update.” Beneath these explanations is a genuine fear — the fear of losing competence, identity, status, or control. It’s resistance, but it’s resistance wearing a suit and tie.
This emotional layer is reinforced by another dynamic: cognitive generalisation. When employees have had negative experiences with earlier technologies — difficult systems, failed digital transformations, stressful transitions — the mind automatically transfers those feelings to Gen AI. Even if the new tool is completely different, it still “feels” similar enough to trigger the same avoidance. That emotional carry‑over deepens technostress and makes the new technology seem more threatening than it actually is.
Then there is the organisational layer. Through imprinting, organisations internalise patterns and assumptions formed during their early years — habits about decision‑making, leadership styles, communication routines, and beliefs about technology. These imprints persist long after the original conditions have changed. When Gen AI arrives, it doesn’t simply meet employees; it meets a history. Some organisations have histories of innovation, experimentation, and psychological safety. Others have histories of caution, compliance, and risk‑avoidance. Gen AI interacts with these imprints, and the organisation reacts accordingly.
When you put these layers together — unconscious conflict, generalised fear, and legacy routines — resistance begins to make sense. What looks like rational hesitation is often something much deeper. And this deeper explanation matters, because leaders typically try to solve resistance with training, policies, or incentives. But if the barrier is emotional and historical, then traditional change strategies miss the real issue.
What we need instead are approaches that speak to the human experience of AI: strategies that reduce anxiety, create space for learning, and acknowledge the psychological impact of major technological shifts. Leaders can design low‑stakes opportunities for teams to experiment with Gen AI, use storytelling to rewrite negative tech narratives, and audit which organisational routines genuinely add safety and which merely protect comfort. Most importantly, they can recognise that resistance isn’t irrational — it’s meaningful. It tells us something about what people value, fear, and hope for in the workplace.
Gen AI is not just a technological evolution; it is a psychological event. The sooner we understand this, the better we can guide organisations towards responsible, human‑centred adoption. For me, this integration of psychoanalysis, generalisation, imprinting, and technostress marks the next step in my research journey — and I look forward to developing it further as I continue to write, reflect, and learn.