Why AI Is Making People Lazier in 2026: A Technical, Level-Headed Look
A quiet pattern I have been watching for months
As someone who mixes a long-standing curiosity for the strange with a permanent bias toward evidence, I have spent nearly a year watching something that deserves a name: a slow, almost invisible drop in people's willingness to think for themselves. It is not dramatic. It does not fit in a photograph. It makes no noise. But it is there, measurable in serious studies and reproducible in my own routine whenever I let my guard down.
I am not going to blame artificial intelligence. It would be intellectually lazy — irony noted — to claim that a tool is responsible for the behavior of the person using it. What I am going to do is take the phenomenon apart the way you would take apart a case filed under "the unexplained": with data, uncomfortable questions and no tabloid language.
1. Evidence, not intuition
Anyone who has investigated a strange phenomenon knows that personal hunches are not enough. You need replicable evidence. Three pieces of work published between 2025 and 2026 strike me as the most solid ground for thinking about this without exaggerating or downplaying:
- MIT Media Lab — "Your Brain on ChatGPT" (June 2025). Researchers from Pattie Maes's group recorded brain activity with EEG while groups of students wrote essays under three conditions: using only their own minds, using a traditional search engine, or using ChatGPT. The group that offloaded to the LLM showed the lowest neural connectivity between regions associated with reasoning, the worst recall of their own text weeks later, and a noticeably lower sense of ownership over the result.
- Microsoft Research + Carnegie Mellon (January 2025). They surveyed 319 knowledge workers about 936 real tasks performed with generative AI. The core finding: the more confidence participants placed in the AI, the less critical thinking effort they applied. Conversely, those who distrusted the AI a bit verified more, compared sources more and reasoned more.
- Pew Research 2025–2026 surveys. More than 60% of frequent chatbot users report preferring a direct answer over searching across several sources. Three years ago that share was below 30%.
Three different methodologies — electroencephalography, a workplace survey, population polling — all pointing in the same direction. To my eye, that is signal, not noise.
2. Why this tool does what the earlier ones did not
There is a classic, reasonable objection worth taking seriously: "Same thing happened with the calculator, with GPS, with Google. Nobody went dumb. We are still here." It is a fair objection. My answer is that the difference with earlier tools is one of breadth and cognitive depth:
| Tool | What we offload | What we keep doing |
|---|---|---|
| Calculator | Arithmetic | Framing the problem, interpreting the result |
| GPS | Spatial memory | Choosing the destination, judging context |
| Search engine | Retrieving information | Asking the question, comparing sources, synthesizing |
| Modern LLM | Framing, synthesizing, writing, deciding, evaluating sources | ... hitting Enter |
An LLM does not automate one isolated link in the chain of thought: it can automate almost every link at once. If you do not have the habit of stepping out of the car once in a while, the muscle atrophies. That is not magic. It is cognitive physiology.
3. What changes in the brain (and what does not)

Let me turn the apocalyptic tone down here. Neuroscience does not say that a ChatGPT user loses neurons. What work like the MIT study shows is something subtler: the moment we offload a task, the brain-activity pattern associated with that task flattens. If the offloading becomes a permanent habit, neuroplasticity — which is indifferent to our good intentions — responds by reinforcing the circuits we do use and weakening the ones we do not.
Put differently: the brain does what any efficient biological system does. It reallocates resources. If we stop writing, writing gets harder. If we stop remembering, remembering gets harder. It is not a metaphysical punishment. It is metabolic accounting.
4. Three usage patterns showing up in 2026
Living with AI every day in my own work, I have identified three patterns that push toward cognitive laziness. I name them because naming them helps me catch them in real time:
Pattern A — "Ask the oracle"
You fire off a single broad question, accept the first answer as final, and move on. It is the equivalent of walking into a library, opening the first book you find, and leaving without opening a second one.
Pattern B — "Blind copy and paste"
Common among people just getting started with programming or writing. The model's output drops into the document without a critical read. It works — until one day it does not and nobody knows why.
Pattern C — "I'm offloading because I'm in a hurry"
The most dangerous of the three because it is reasonable. Calendar pressure pushes you to delegate everything delegable. The problem is that hurry, repeated for months, hardens into habit, and habit becomes the new floor where judgment used to live.
5. What this looks like in code
Developers are not immune. If anything, we are on the front line of the experiment. The two snippets below show the difference between a "passive" and an "active" use of an AI assistant on the same task.
```shell
# Passive pattern — the assistant decides, I apply
$ ai "fix this bug in login.ts"
> Here is the fix...
$ git apply patch.diff
$ git commit -m "fix: login bug"
# Result: the bug no longer appears. I also have no idea why it disappeared.
```
```shell
# Active pattern — the assistant suggests, I verify
$ ai "walk me through what this function does step by step, don't propose changes yet"
$ ai "what hypotheses would explain this behavior? list three"
$ # I review the hypotheses, discard two, test the third
$ ai "check whether my fix covers edge case X"
$ git commit -m "fix(login): timezone shift on token expiry"
# Result: the bug is gone and I understand the system a little better.
```
Both workflows produce a commit that closes the ticket. Only one produces a programmer with sharper judgment tomorrow.
6. A simple ritual to keep the thread
The following script is trivial, but the ritual it enforces is not. I use it as a reminder: before accepting a model's answer, I write down what I asked, what I expected, and what surprised me or did not fit. It forces me to think after reading, which is exactly what hurry strips away.
```python
# interrogate.py — a critical-use journal for AI
from datetime import datetime
from pathlib import Path

LOG = Path.home() / ".ai-journal.md"

def log_entry(question: str, expected: str, surprise: str) -> None:
    entry = f"""
## {datetime.now().isoformat(timespec='minutes')}
- **I asked:** {question}
- **I expected:** {expected}
- **What surprised me:** {surprise}
"""
    with LOG.open("a", encoding="utf-8") as f:
        f.write(entry)
    print(f"Entry saved to {LOG}")

if __name__ == "__main__":
    log_entry(
        question=input("What did you ask the model? "),
        expected=input("What did you expect to hear? "),
        surprise=input("What stood out in the response? "),
    )
```
Three prompts. Not elegant. It works because it forces you to process the answer instead of consuming it. After a few weeks with this journal, I noticed something concrete: by writing down what I expected and what surprised me, I started catching hallucinations — answers that sound right but are invented — much faster than before. A year ago that filter was looser.
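A natural follow-up question is whether the journal is actually being used critically or just filled in mechanically. The companion sketch below is my own convention, not part of the original script: a `summarize` function (hypothetical name) that tallies how many entries recorded a genuine surprise, a rough proxy for whether you are still reading answers instead of consuming them.

```python
# summarize.py — rough health check for the critical-use journal.
# The entry format matches what interrogate.py writes; the counting
# rules ("empty surprise field means no surprise") are an assumption.
from pathlib import Path

def summarize(journal: Path) -> dict:
    """Count journal entries and how many recorded a non-empty surprise."""
    if not journal.exists():
        return {"entries": 0, "with_surprise": 0}
    lines = journal.read_text(encoding="utf-8").splitlines()
    # Each entry starts with a "## <timestamp>" heading.
    entries = sum(1 for line in lines if line.startswith("## "))
    # An entry "counts" only if the surprise field has actual content.
    with_surprise = sum(
        1
        for line in lines
        if line.startswith("- **What surprised me:**")
        and line.split(":**", 1)[1].strip()
    )
    return {"entries": entries, "with_surprise": with_surprise}
```

If the ratio of surprises to entries drifts toward zero, that is worth noticing: either the model has become flawless, or you have stopped paying attention. One of those is more likely.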
7. The one historical parallel I think is useful
Here my other side comes in. I have spent years reading about strange phenomena and learning to separate what holds up under evidence from what just has good marketing. What strikes me about the AI situation is not the novelty, but the echo: the human temptation to outsource judgment to something that seems to know more.
In history, that figure has been the oracle, the augur, the unquestionable expert. The promise is always the same: "spare yourself the fatigue of thinking, I have already thought for you." And the consequence, when the promise is accepted without a filter, is also always the same: you lose touch with the very criteria that would let you doubt the answer.
The important difference with a temple oracle is that a language model does not claim to know. It only predicts the next most likely token given the context. If we treat it as an oracle, the problem is not the model. It is us. And as with any miscalibrated belief, the solution is not to destroy the tool — it is to change the relationship with it.
8. What is actually working in 2026

Not everything is drift. Some teams and individuals are running experiments that, as far as I can tell, are holding the effect at bay:
- "AI-off Fridays" on some engineering teams: one day a week with no assistants, to keep the underlying technical muscle alive.
- Mandatory peer reviews of any AI-generated code before it can merge, with a simple rule: "explain in your own words what this does".
- Critical-use journals, like the one in the previous section, now appearing in consultancies and research teams. Ridiculously simple, surprisingly effective.
- Universities that have stopped banning AI and started asking students to submit, along with their essay, a critique of what the model suggested and a justification for why they accepted or rejected it.
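The "explain in your own words" rule can even be enforced mechanically. Below is a minimal sketch of a git `commit-msg` hook, assuming a team convention of `AI-assisted:` and `Why:` trailers — both trailer names are invented for this example, not a standard:

```python
#!/usr/bin/env python3
# commit-msg hook sketch: a commit that declares AI assistance must
# also carry a human explanation. "AI-assisted:" and "Why:" are
# assumed team conventions, not standard git trailers.
import sys
from pathlib import Path

def check(message: str) -> bool:
    """Return True if the message passes the explain-it-yourself rule."""
    lines = [line.strip() for line in message.splitlines()]
    ai_assisted = any(line.lower().startswith("ai-assisted:") for line in lines)
    has_why = any(
        line.lower().startswith("why:")
        and len(line.split(":", 1)[1].strip()) >= 10  # more than a token gesture
        for line in lines
    )
    # Unassisted commits pass; assisted ones need a real "Why:" line.
    return (not ai_assisted) or has_why

if __name__ == "__main__" and len(sys.argv) > 1:
    # git passes the path to the commit message file as the first argument.
    if not check(Path(sys.argv[1]).read_text(encoding="utf-8")):
        sys.exit("AI-assisted commit needs a 'Why:' line written in your own words.")
```

The check is deliberately dumb — it cannot tell a good explanation from a bad one. Its value is the same as the journal's: it interrupts the blind copy-and-paste reflex for ten seconds, which is usually enough.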
9. Conclusion: it is not the tool, it is the ritual
If I had to boil everything above down to one sentence, it would be this: AI is not making us lazy; it is offering us, for the first time in history, efficient laziness. The distinction matters. Efficient laziness feels like productivity. That is why it is hard to detect, and that is why it is worth naming.
The way out is not to go back to pen and paper. It is far less romantic: design your usage rituals. Decide what you want to think yourself and what you are willing to delegate. Verify what you delegate. Keep a record. Take a day off from assistants every now and then — not out of nostalgia, but for the same reason a runner trains uphill.
The brain, like any organ, answers what you ask of it. If in 2026 we ask little, 2027 will give us little back. If we keep asking it hard questions — even when an answer is one click away — it will keep sharpening. That much, at least, really is up to us.