Over-Reliance on GenAI Impacts Critical Thinking; Here's How I Think We Can Fix It

Note: This article is addressed to software engineers, but the ideas apply to knowledge workers in every domain.

I've spent a considerable amount of time using GenAI at work over the past year. Having also spent time with many power users, I've begun to see an interesting trend.

Engineers are increasingly using GenAI to accomplish a wide range of tasks, from advanced software engineering problems to drafting a simple Slack message.

The AI, having been trained on a large portion of the internet, generates very reasonable responses most of the time. This increases the user's confidence and makes them rely more on the tool, helping software engineers accomplish more in less time.

But here's something we fail to acknowledge: in the pursuit of increased productivity, we delegate some or most of our critical thinking to the GenAI.

In April 2025, Microsoft Research published a paper, The Impact of Generative AI on Critical Thinking, in which they studied the effects of GenAI use on knowledge workers.

Here's a quote from the research paper:

“while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.”

I understand the concerns raised by the researchers in the study, and find them fairly reasonable.

Engineers are increasingly reviewing code written by generative AI rather than writing their own. This is a significant mindset shift from “problem solving and execution” to “task stewardship and verification”.

I strongly believe this trajectory could seriously impede critical thinking among engineers in the long term. We risk becoming armchair critics (offering judgements and opinions without enough involvement or experience), which calls the quality and usefulness of our code reviews into question.

After contemplating this problem for a while, I came up with an approach to address it as a consumer of these tools, and I recommend every engineer consider it.

Moving forward, AI is inevitable in the workplace, and I don't see a future of work without it. There will be a push for productivity from management, and AI has shown the potential to double, triple, or even 10x the speed of an average engineer.

The challenge is to become more productive without losing skill. Here's what I think an engineer using AI should do:

Go on an AI Detox for a week every 2 months.

Rules for the AI Detox week:

  1. No AI tools should be used
  2. Any tool that existed in the pre-ChatGPT era can be used
  3. The engineer benchmarks their performance on various tasks/metrics without AI:
    • Time it takes to complete a task
    • Critical thinking and problem-solving abilities
    • Skills check (writing, coding, design, reading, communication, etc.)
    • Domain understanding (strengths and weaknesses)

At the end of the detox week, carefully analyze the benchmarks and observations gathered during the week, and compare them against how you were performing with AI.
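To make that comparison concrete, here's a minimal sketch of one way to log and compare the benchmarks. The file names, task categories, and the time-per-task metric are illustrative assumptions, not a prescribed format; track whatever metrics actually matter to you.

```python
# Minimal sketch of a benchmark log for the AI Detox week.
# File names, categories, and the "minutes" metric are hypothetical examples.
import csv
from collections import defaultdict
from pathlib import Path

def log_task(path: str, task: str, category: str, minutes: float, notes: str = "") -> None:
    """Append one completed task to a CSV log (detox week or AI-assisted baseline)."""
    file = Path(path)
    new_file = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["task", "category", "minutes", "notes"])
        writer.writerow([task, category, minutes, notes])

def average_minutes(path: str) -> dict:
    """Compute the average time per task category from a log file."""
    totals, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["category"]] += float(row["minutes"])
            counts[row["category"]] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

def compare(detox_path: str, baseline_path: str) -> None:
    """Print side-by-side averages to highlight where the gaps are."""
    detox, baseline = average_minutes(detox_path), average_minutes(baseline_path)
    for category in sorted(set(detox) | set(baseline)):
        d = f"{detox[category]:.0f}" if category in detox else "-"
        b = f"{baseline[category]:.0f}" if category in baseline else "-"
        print(f"{category:20s} detox avg: {d:>6} min   with AI avg: {b:>6} min")

# Example usage during the week:
# log_task("detox_week.csv", "Fix flaky integration test", "coding", 95, "needed docs, no AI")
# compare("detox_week.csv", "ai_baseline.csv")
```

The qualitative items (critical thinking, domain understanding) won't fit neatly into a number; a short written note per task, as in the `notes` column above, is usually enough to spot patterns at the end of the week.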

Use this information to tackle the weaknesses you found and bridge the gaps. AI tools can themselves be used to help with this.

(I'm working on another article about how I use GenAI/LLM tools to learn and think critically.)

This approach strikes the right balance between both worlds: staying productive while upskilling and improving critical thinking at the same time.

Do share your thoughts and comments.
