Essays
Is AI Making Us Dumber?

That is a fair question, but is it really what bothers us?
It seems necessary to investigate what precedes this doubt in our minds, because is that really all we want to know?
Delegating to AI the homework, the understanding of a long email thread, or the creation of a new algorithm… It kind of feels like a form of cheating, right? At the end of the day, was it me who solved the problem or was it the AI? Where is my ability in all this? What did I learn from it?
I realize that in questions related to “AI making us dumber,” there are hidden issues or more essential concerns that we need to investigate before giving a “yes” or “no” answer.
Of course, people don’t like to be perceived as dumb, or even as merely less intelligent. Humanity has long understood this, and Schopenhauer later captured it in his characteristically direct language:
“Nothing arouses as much hatred as the perception of another’s intellectual superiority.”
Along the lines of what the notable German philosopher expressed, people accept quite well that others might be stronger, more beautiful, or wealthier. But we have a strong tendency to resent being surprised by someone demonstrating that they are more intelligent than we are. So I also believe there is a genuine concern about losing this quality we prize so much: being smart.
Learning vs. solving faster?
So “becoming dumber,” in the sense of losing skills or ceasing to acquire them, feels like a real and deeply personal threat. I’ve noticed, at least at the time of writing this essay, that developers are afraid to admit they play around with vibe coding. It exposes them publicly as someone who resorts to shortcuts, revealing intellectual laziness or something of the sort. There is indeed an argument about the potential for garbage, noise, bloat, and fragility in AI-generated code (at this moment), and it strengthens the case against delegating code generation to an IDE: you might not understand the code later, or worse, not know what else is hiding in code you don’t master, a fragility that shows up as critical vulnerabilities or bugs.
A developer who relies heavily on AI-generated code today is not just delegating direct involvement with the outcome; they are also giving up knowledge and other intellectual competencies.
Is knowing how to apply it the way out?
There is also a line of argument, originating from people in the software engineering sector, summed up as “knowing how to prompt the AI.” Knowing how to specify, and how to evaluate where the AI-generated code is heading, are necessary to avoid problematic code or functional results that fall short. This idea of making the best use of AI-generated code aligns with “whoever has the knowledge and knows what they are doing can be more careful and critical,” and thus avoid creating AI Slop.
The notorious AI Slop
A quick bit of context for those who have come across the term AI Slop and have a rough idea of what it is, but are still unsure what it represents. It is AI-generated content with some of these characteristics:
- Is of low quality, either technically or functionally
- Lacks appropriate human review
- Holds no real value or depth
- Forces integrations that frustrate the user
- Contains hallucinations or distortions of reality
Easy Access + Rush + Lack of Context = Guaranteed AI Slop
Above, I playfully tried to create a super-simplified formula for generating AI Slop. But if I can use a few more characters, I think I have an even better formula (still simplified, but improved):
AI Slop = Access to AI + Rush for results + Low critical thinking + Lack of context + Low responsibility + Ego
That’s it. I will end my digression on AI Slop here. I should explore this in another essay. Let’s get back to the topic: “Is AI making us dumb!?”
And what does it mean to be dumb anyway?
You would [only mentally, of course] call someone dumb if they showed deficits or flaws related to:
- Intelligence: logical reasoning, abstraction, memory, response speed.
- Knowledge: study, formal knowledge, information.
- Wisdom: judgment, prudence, connecting experiences and knowledge.
- Executive functions: self-control (impulses or emotions), planning, organization, focus.
I am no professional or expert on the human psyche (far from it), and I know that “dumb” is not a scientific term. Still, to summarize what “dumbness” means here, so that we share the same starting point in this discussion, it is important to spell out its parameters.
So, what’s the verdict? Is AI really making us dumb?!
Yes! There. Solved.
Well, if you’ve read this far, you’re already expecting that the answer wouldn’t be exactly that.
Hoping for genuine frankness from you, the reader of this essay, I ask that you reflect on the aspects I listed in “And what does it mean to be dumb anyway?” and answer for yourself whether, by using AI in your day-to-day life, you are losing something related to Intelligence, Knowledge, Wisdom, or Executive functions. If the answer is “Yes,” then AI is making you dumb; if the answer is “No,” then it is not; and if the answer is “It depends,” then you are going through a mixed transformation, losing and gaining at the same time.
A mixed experience with AI: between the rust and the shine
Let’s say that by using AI to delegate cognitive effort, and by no longer learning about something, you are achieving better results, or even creating opportunities to understand things you wouldn’t have had easy access to before.
You might stop acquiring some skills, yes, that’s a fact! But you might acquire others; it’s a trade-off.
Mentally, I categorize this as Rust vs. Shine. You rust in some aspects, but you shine in others.
As I write this essay, an absurd number of people are developing their own working tools, producing, and improving themselves in an unprecedented way. There are developers creating projects in programming languages they do not master; they rust in their learning of that language, but they shine by leveraging their knowledge beyond their own cognitive and effort limits.
The tailor’s metaphor
When sewing machines and early textile industrialization became popular, many classic tailors and master artisans felt more than a justified financial threat: they felt profound technical contempt for the novelty. Sewing custom clothing required calculation and anatomy, absolute focus, years of study, and millimeter precision. For many of the old guard, the machine wasn’t an evolution; it was a lazy shortcut that stripped the craft of its essential intellectual demand.
The deepest wound, however, appeared when they looked out on the streets and noticed that regular people were wearing well-made coats and pants, produced at scale. The uncomfortable truth exposed there was that the vast majority cared little for mastery, the mental genius of the artisan, or the effort invested in cutting each lapel. People simply needed to not be cold, and they needed broader access to clothing. For the general population, the complexity of the process became irrelevant as long as they had good, practical clothes within reach.
Once again, what the public demanded was not an intelligence certificate from the maker, but the delivery of the outcome. Whether the tailors liked it or not…
Landing back in our case
Regarding the use of AI, whether professional or personal, we are exploring ideas, creating tools, and learning new topics. But think of the tailors, who once held the refined and desirable knowledge to produce and adjust clothes with mastery: that knowledge is no longer common. The number of tailors per capita is drastically reduced today, the profession nearly extinct. The accessibility of tools, production processes, generative AI, and whatever else you want to put on this list is indeed removing the need for a massive scale of professionals who know the craft, the trade. We are jumping to a level of production of content and tools where this specialized knowledge is being diluted, facilitated, and made accessible to a large part of the global population.
Having understood “The Tailor’s Metaphor,” I think I should stop worrying about whether AI is making me dumb. We are in a moment of great transformation that has not yet reached its plateau. The tailor’s mindset will not help me adapt to the current reality, let alone the future.
We live in times of transition, and it is sometimes hard to take part in them.
Expectations around programming work have changed from one era to the next. Today, telling someone they would spend one college semester on electronics and another on low-level languages would sound crazy!
A brief, ultra-super-mega artificially simplified and informal history of programming trends:
- 1940s — Operator | Programmed physically | Machine language | Focus: hardware
- 1960s — Programmer | Early high-level | Procedural | Focus: algorithms
- 1980s — Software Eng. | Structured high-level | Modularization | Focus: process
- 1990s — Developer | Object-Oriented | Abstraction | Focus: applications
- 2000s — Web/Eng. | High-level + web | APIs & services | Focus: internet and scale
- 2010s — Data/Eng. | High-level + data | Distributed & ML | Focus: metrics
- 2020s — AI Engineer | Code + natural language | Model orchestration | Focus: context and AI
Here’s a super-mega-blaster informal simplified summary of the evolution of technologies in computing and software:
- Physical → manipulated hardware
- Binary → instruction by instruction
- Procedural → logical sequence
- Objects → modeling the world
- Services → modeling distributed systems
- Data → probabilistic modeling
- Models → semantic modeling
Let’s change the question
I want to suggest a question replacement: instead of “Is AI making us dumber?”, let’s ask ourselves:
- Which project have I always left in the drawer because I lacked the necessary technical knowledge, but can now get off the ground with artificial intelligence?
- If I no longer have to spend hours memorizing syntax or configuring basic infrastructures, how far can my creativity go?
- What new layer of value can I add to my work now that I can outsource the ‘manual labor’ of code?
- How can I become an excellent orchestrator, evolving from someone who merely tightens the screws into someone who designs the entire machine?
- How can I surgically use AI only for the boring parts of the process, preserving the ‘artisanal’ part solely for what I truly enjoy building?
Looking at the big picture, regardless of our desires regarding the impacts of Artificial Intelligence, once we change the question, we realize we are losing our monopoly on manual sweat. In exchange, we gain free time and free minds to be the architects of a new era.
There is still more to discuss
When I say “we realize we are losing our monopoly on manual sweat,” I know there is an aspect of employability: how necessary are we in this scenario of massive optimization? Work positions and even entire professions will be categorized as “tailors.” I know this, and I don’t want to only paint the bright side of the transitions we are living through.
Take the example of the company Block, founded by Jack Dorsey (co-founder of Twitter). On February 27, 2026 (the week before I wrote this essay), they laid off 40% of their workforce. That 40% represents about 4,000 employees. And according to Dorsey himself, this mass layoff was driven by the growing adoption of artificial intelligence at Block.
So there. The other side of this modern marvel that is AI—in whatever form it comes: gen AI via chat, AGI, agentic AI, ASI—is also recorded here. There are gains and losses. But that’s a topic for another essay. I will wrap this one up here.