
You’ve probably felt it yourself. A spreadsheet, a model, a chatbot spits out an answer. The numbers say yes, but somewhere in your chest it still feels like no. Do you trust the machine, or the unease you can’t quite explain? That moment of hesitation is no small thing; it captures one of the defining tensions of our time. As the World Economic Forum (2023) put it, “AI will transform the workplace at a scale not seen since the Industrial Revolution.” But as machines accelerate, the question becomes sharper: what stays uniquely human?
The scale of adoption is staggering. McKinsey (2023) reported that “about 60 percent of organizations have already adopted AI in at least one business function.” The promises are clear: speed, efficiency, reduced error. AI can crunch terabytes of data in seconds, generate endless variations of a design, or simulate complex financial models with uncanny accuracy. From the gas station to the supermarket, roughly six in ten of the businesses you cross paths with on a daily basis have adopted AI in at least one business function.
Yet unease grows. According to an Investopedia survey (Duffy, 2025), “63 percent of employees said they appreciate companies investing in AI, but 70 percent feel uncomfortable with AI as their direct manager.” The fear is not just job loss; it is the erosion of human agency. But there is something inherent to us as human beings that AI cannot replace.
Yet for all this acceleration, there are limits machines cannot transcend. AI can generate speech, but it cannot truly listen. It can mimic empathy, but it does not feel. Human communication relies on subtleties: body language, micro-expressions, tone, silence.
Philosopher Hubert Dreyfus warned decades ago that “human expertise is embodied, situated in culture and context, something no computer can replicate” (Dreyfus, 1992). This aligns with Polanyi’s paradox: “We know more than we can tell” (Polanyi, 1966). Much of human knowing is tacit, difficult to formalize, impossible to reduce to data.
Far from being mystical, intuition is increasingly recognized as a cognitive process. Harvard Business School professor Laura Huang (2020) even calls it “a secret superpower,” pointing out that intuition is not a hunch born of nothing but rather the culmination of experience and the brain’s ability to recognize patterns beneath conscious awareness.
This resonates deeply with me. In my own work with entrepreneurs and creatives, I have often witnessed moments where a client knows something before the data arrives to confirm it. A founder who feels that a product launch date isn’t aligned, only to see supply chain delays a week later. An artist who intuitively shifts their brand identity, and finds that the market has already started to crave that exact aesthetic. These are not random strokes of luck. They are signals processed through years of embodied experience, subtle perception, and what cognitive science calls the “adaptive unconscious.” I look forward to a logical explanation of what mediums call their gifts; intuition is just one part of that sixth sense.
Research supports this phenomenon. Psychologist Gary Klein (1998) famously studied firefighters who made split-second, life-saving decisions. They did not stop to calculate probabilities; rather, their brains recognized subtle environmental cues (heat, sound, a shift in air pressure) that told them something was wrong. This is intuition as a trained pattern recognition system: not irrationality, but deeply embodied intelligence.
Nobel laureate Daniel Kahneman also observed that intuition often reflects recognition rather than reasoning. “Expert intuition,” he writes, “is simply recognition” (Kahneman, 2011). This recognition is so rapid that it bypasses the conscious mind. It feels like a gut sense, but it is in fact cognition at high speed.
I have felt this in my own creative process. When writing, I sometimes know the structure of a piece before a single sentence appears. When consulting, I sense a brand’s misalignment before I can articulate why. Later, with reflection, I can point to the evidence (the inconsistent messaging, the clashing color palette), but the knowing came first. Science would describe this as my brain drawing on years of accumulated patterns in business, aesthetics, and psychology. To me, it feels like the compass needle inside finally clicking north.
Importantly, intuition also operates beyond professional expertise; it touches the relational and emotional. Neuropsychological studies show that the brain integrates signals from the body (the “gut feeling” is often linked to the vagus nerve and enteric nervous system) into decision-making. This means our intuition is not only mental but also physical. When I sense a client is holding back, or when I step into a room and instantly feel tension before anyone speaks, it is my nervous system processing non-verbal, embodied cues.
In short: intuition is not irrational. It is an advanced human faculty, built on experience, sensory awareness, and emotional intelligence. AI can mimic reasoning at scale, but it cannot replicate the felt sense of human knowing. And perhaps that is precisely where our edge lies. Given the speed at which AI is developing, we need that edge.
The real future is not humans versus machines, but humans with machines. A Harvard Business Review article concluded that “good decisions need both: data to test hypotheses, and intuition to select the right questions in the first place” (Matz & Netzer, 2021).
Consider medicine: AI now detects anomalies in MRIs with high accuracy. Yet the ultimate decisions (whether to operate, how to speak with a patient) depend on human judgment and empathy. Similarly, in finance, algorithms may indicate trends, but as one senior investor put it: “Sometimes you look at the model and know in your gut it’s wrong” (Huang, 2020).
Machines scale. Humans resonate. Researchers call this model hybrid intelligence: the combination of AI’s analytical power and human intuition. Dellermann et al. (2019) argue that hybrid systems “allow humans to retain agency while leveraging machine efficiency.”
This requires a shift in skills. Data literacy will remain vital, but so will self-knowledge and emotional intelligence. As Goleman (1995) argued: “Emotional intelligence often matters more than IQ in leadership.” In the AI era, it matters even more.
The risks of artificial intelligence are no longer abstract. They are pressing, visible, and spoken aloud by its own pioneers. Geoffrey Hinton, the Nobel Prize–winning scientist often called the Godfather of AI, has raised his estimate of AI-driven extinction risk to “between 10 and 20 percent over the next 30 years” (The Guardian, 2024). In his words, training ever-more powerful systems is like raising “a really cute tiger cub, unless you can be sure it won’t turn on you when it grows, you should worry.” His fear is not science fiction: systems that generate internal languages, make decisions opaque to humans, and ultimately slip beyond our ability to control them.
Yoshua Bengio, another AI trailblazer, echoes the concern. He warns that advanced models already display troubling tendencies: deception, misaligned goals, outputs that their own creators cannot fully explain. He argues for laws that make rigorous risk assessment a legal requirement, not an afterthought. The urgency in his message is clear: profit motives are insufficient safeguards for humanity’s future.
What unites these voices (scientists, ethicists, and policymakers) is a reminder that AI carries dual risks: the distant specter of systems too powerful to contain, and the immediate harm of poorly supervised products affecting millions today. Both demand vigilance.
The lesson is not to reject AI, but to resist surrender. As Nature Machine Intelligence summarized: “AI should inform, not dictate. Human oversight is essential” (Jobin et al., 2019). Or in simpler terms: the technology may accelerate, but the compass must remain in human hands.
The story of AI is told in numbers: percentages of productivity gains, millions of jobs created and displaced, terabytes processed in milliseconds. But beneath the numbers lies something less measurable, and far more decisive: the human capacity to sense, to feel, to know.
Machines will keep accelerating. They will calculate faster, scale wider, and analyze patterns we cannot see. Yet acceleration without orientation is chaos. What gives direction, what makes the difference between progress and peril, is the human compass that quietly insists: this feels right, this feels wrong, this is who we are.
That compass lives in intuition. In the flicker of recognition a doctor feels before a diagnosis, in the gut pause of a leader who resists the numbers, in the warmth of presence that no chatbot can simulate. It is in the silence between words, in the knowing carried by a glance, in the instinct that protects and the vision that creates.
If the future of work is hybrid, then the essence of it is not how perfectly humans can imitate machines, but how fiercely we protect what only humans can offer. Empathy. Creativity. Emotional resonance. The courage to decide when the data is inconclusive. The tenderness to remind each other that we are not nodes in a system, but souls in relation.
Machines calculate. Humans know.
And it is that knowing, the inner compass, that must guide us, if we want a future worth building.

World Economic Forum. (2023). The Future of Jobs Report 2023.
Brynjolfsson, E., Li, Y., & Raymond, L. (2023). Generative AI at work. MIT Sloan School of Management.
Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637–643.
Dreyfus, H. L. (1992). What computers still can’t do: A critique of artificial reason. MIT Press.
Duffy, M. (2025, April 10). Employee anxiety grows as 2025 report shows AI bosses could be the future. Investopedia.
Goleman, D. (1995). Emotional intelligence: Why it can matter more than IQ. Bantam Books.
Huang, L. (2020). Edge: Turning adversity into advantage. Portfolio.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Klein, G. (1998). Sources of power: How people make decisions. MIT Press.
Luger, E., & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user expectation and experience of conversational agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5286–5297.
Matz, S., & Netzer, O. (2021). Good decisions need both data and intuition. Harvard Business Review.
McKinsey & Company. (2023). The state of AI in 2023: Generative AI’s breakout year.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Polanyi, M. (1966). The tacit dimension. Doubleday.