Are You Falling Asleep at the Wheel?

Why Critical Thinking Is the First Casualty of AI Adoption

🕒 3-minute read

Life’s been a bit different lately — I recently became a father.

While on paternity leave, between nappies, feeds, and the daily trip to the coffee shop, I took some time to finally get on top of our finances.

I uploaded our joint and personal bank statements into ChatGPT and asked it to help me build a budget.

It looked impressive at first. Clean categories, spending summaries, some smart-looking suggestions.

But then I started spotting things that didn’t add up.

Subscriptions I’d never paid for. Coffee shops I’d never visited. Entire purchases that didn’t exist.

Even though it had the actual data — it was hallucinating.

And it reminded me of something important: AI can sound convincing — and still be completely wrong.

If I’d followed that advice blindly, I could’ve spent more than our budget allowed — or worse… cut back on coffees and caramel squares with my wife.

Blinded by the Machine

During those quiet moments, I also read a fantastic book called Co-Intelligence by Ethan Mollick — one of the most practical and thought-provoking books I’ve come across on how to work with AI while keeping the human in the loop.

One study in the book really stuck with me — because it shows just how easy it is to fall asleep at the wheel.

At Harvard Business School, researcher Fabrizio Dell’Acqua ran an experiment involving 181 professional recruiters. Each one was asked to review the same set of 44 job applications.

They were split into three groups:

  • One group received no AI support at all.

  • Another used low-quality AI with poor recommendations.

  • The final group was given high-quality AI — the kind you’d expect to boost performance.

But the results were surprising.

The recruiters with the best AI actually performed worse.

Why?

Because they stopped thinking.

The data showed they spent less time on each application. They reviewed fewer details. And they were far more likely to blindly accept AI suggestions without applying their own judgement.

They weren’t lazy — they were caught napping.

The better the tool, the more they trusted it. And the more they trusted it, the less they engaged.

Dell’Acqua called it:

“Falling asleep at the wheel.”

And this isn’t just about recruitment.

It’s a warning to every business function where AI is being used to support decisions — from marketing and finance to legal and operations.

The moment we stop questioning AI is the moment it becomes most dangerous.

Because AI doesn’t know your business, your values, or the real-world nuance behind a decision. It doesn’t carry responsibility for getting it wrong — you do.

The Real Risks

  • Loss of critical thinking
    We stop asking why. Judgement dulls. Skills fade.

  • No accountability
    When a poor decision is made, it’s easy to say: “The AI said so.” But someone still has to take responsibility.

  • Bias at scale
    AI learns from historic data. If that data includes bias, it can reproduce it and amplify it — unless we step in.

  • Legal and ethical exposure
    Especially in recruitment. If you can’t explain why one candidate was selected over another, you could face legal action — with no defence to offer.

So, How Do We Stay Awake?

We need to build systems — and cultures — that keep humans in control. That don’t let us outsource our judgement. That treat AI as a partner, not a decision-maker.

Here are five practical ways to do that:

1. Keep humans in the loop

AI should assist decisions — not make them alone.
✅ This keeps people engaged and accountable.

2. Build AI literacy

Train your team to understand how AI works — and where it doesn’t.
✅ This sharpens judgement and reduces misuse.

3. Add friction on purpose

Use prompts, checkboxes or policies that force people to pause and reflect.
✅ This prevents mindless decisions.

4. Audit for bias

Test your AI tools regularly. Don’t assume fairness — prove it.
✅ This protects people, and your organisation.
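
To make that concrete, here’s a minimal sketch of one well-known audit, the “four-fifths rule” used in recruitment: the selection rate for any group should be at least 80% of the highest group’s rate. The group names and numbers below are entirely hypothetical — a real audit would use your tool’s actual screening outcomes.

```python
# Sketch of a routine bias audit for an AI screening tool, using the
# "four-fifths rule": any group's selection rate should be at least
# 80% of the highest group's rate. All data here is hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: True if the group passes the four-fifths rule}."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # A group fails if its rate is below 80% of the best group's rate
    return {g: r / top >= threshold for g, r in rates.items()}

# Hypothetical audit: shortlisted vs. total applicants per group
audit = {"group_a": (30, 100), "group_b": (18, 100)}
print(four_fifths_check(audit))  # group_b: 0.18 / 0.30 = 0.6 → flagged
```

Even a simple check like this turns “don’t assume fairness” into something you can run on a schedule and put in front of a human.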

5. Choose explainable systems

If people can’t understand how the AI made a decision, they can’t challenge it.
✅ Transparency is key to trust.

Final Thought

AI is not just about technology — it’s about people.

Used well, it can make work better, fairer, and more efficient. But if we stop thinking critically and follow AI blindly, we risk losing the judgement, fairness, and empathy that actually make us great at what we do.

AI powers the engine — humans stay at the wheel.

📚 Book Recommendation

If you’re navigating how to use AI in your work — or want to better understand how to stay human in the age of co-pilots and agents — I highly recommend Co-Intelligence by Ethan Mollick.
It’s smart, grounded, and refreshingly practical.
👉 https://www.amazon.co.uk/Co-Intelligence-Living-Working-Ethan-Mollick/dp/0753560771

The Human Shift is a weekly dose of clarity in a world of constant change.

We explore how work is evolving — and how people can thrive through it. From AI and leadership to culture, wellbeing, and recruitment, this newsletter shines a light on the shifts shaping the future of work.

It’s not about the tech. It’s about the people.

If you care about building better workplaces, navigating change with purpose, and staying human in a digital age — you’re in the right place.

Subscribe to stay in the loop.
