Matt Shumer's viral post "Something Big is Happening in AI, And Most People Will Be Blindsided" has been viewed over 50 million times. Fortune published it. Tech Twitter amplified it. Millions of people read it and felt a jolt of fear.
I'm writing this because I think that fear is pointed in the wrong direction.
I work with AI every single day. Not to build demos or prototypes but to keep production systems running at scale. I've spent years building AI-powered tools, optimizing LLM inference pipelines, and deploying these systems into real enterprise environments. I also build AI applications on the side because I genuinely believe in this technology's potential.
But here's something the hype merchants won't tell you. The real danger isn't that AI will replace you. It's that people will trust AI to replace you. And the consequences of that misplaced trust could be catastrophic.
I debated Shumer's thesis with AI itself
Before writing this I did something a little unusual. I took Shumer's article and debated it point by point with Claude, one of the most advanced AI systems in the world. I told it to defend Shumer's position with everything it had. I brought my counterarguments from real engineering experience. We went back and forth, round after round.
By the end, the AI conceded my core arguments.
Not because I'm smarter than the AI. But because the arguments I was making came from something AI doesn't have. Years of watching what actually happens when AI-generated code hits production. The failures I've seen aren't theoretical. They're not "edge cases." They're systemic. And they get worse the more you remove human oversight.
Let me walk you through exactly what I argued because I think these are the arguments 50 million people needed to hear alongside Shumer's post.
"AI can build an app for you." Sure. But can it really?
Shumer's most compelling example is this. He describes what he wants, walks away for four hours, and comes back to a finished app. That sounds incredible. And for a prototype? It probably is.
But here's what he's not telling you. The apps AI builds in four hours are the easy ones. Internal tools. Simple workflows. CRUD applications. The kind of software that honestly could have been built with no-code tools and a competent project manager even before AI.
Try telling AI "Build me 3D Gaussian Splatting video calling. Every feature. Don't stop until it's done." It can't. Not even close. The kind of software that actually powers industries (complex, interconnected, deeply architected systems) requires something AI fundamentally lacks: the ability to hold an entire system's context in its head and make thousands of interdependent design decisions that balance performance, security, maintainability, and user experience simultaneously.
"Most software engineers aren't building this," Shumer might say. True. But the ones building the "easy" stuff were already the most replaceable workers in tech. The hard work, the work that actually matters, is still firmly in human hands.
The "fix the code" death spiral is still very real
Shumer says the AI models of today are "unrecognizable" from what existed six months ago. He says the debate about whether AI is "hitting a wall" is over.
I disagree. And I have receipts.
Just this year, in 2026, using the latest models, I asked an AI to optimize a database layer for performance. A simple, specific request. What did it do?
It deleted the ORM.
For non-technical readers: an ORM (Object-Relational Mapping) is a layer of software that sits between your application and your database and, by parameterizing every query, protects it from a class of attacks called SQL injection. It's one of the most fundamental security protections in modern web development. It exists because decades ago engineers learned the hard way that letting raw, unvalidated queries hit your database is an invitation for attackers to steal, modify, or destroy your data.
The AI didn't know any of that history. It just saw that removing the ORM made the queries faster. Mission accomplished, right? Except now anyone with basic hacking knowledge could walk right through the front door of the application.
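To make the risk concrete, here's a minimal sketch (my own illustration, not code from the incident above) of the difference between raw string-built SQL and the parameterized queries an ORM issues under the hood:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A hostile "name" an attacker might type into a login or search box.
name = "x' OR '1'='1"

# Raw string-built SQL: the input is spliced into the query, so the
# attacker's quote characters rewrite the query's logic.
unsafe = f"SELECT * FROM users WHERE name = '{name}'"
print(conn.execute(unsafe).fetchall())  # every row comes back

# Parameterized SQL (what an ORM issues under the hood): the driver
# treats the input purely as data, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (name,)).fetchall())  # []
```

The two queries look almost identical on the page, which is exactly why a non-engineer reviewing AI output can't tell them apart.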
This isn't a story from 2023. This isn't a story about a "free tier" model. This happened with the best models available doing a routine engineering task. And the person who would catch this mistake? A human engineer who understands what an ORM is and why it exists.
The "fix the code, fix the code" loop that everyone experienced in 2023 hasn't disappeared. It's just gotten more subtle. The AI doesn't produce obviously broken code anymore. Instead it produces code that looks perfect, passes basic tests, and contains hidden landmines that only someone with engineering knowledge would spot. I've literally seen API keys exposed on live websites built by these "vibe coded" app generators.
That's not better. That's worse.
The library reinvention problem nobody talks about
Here's something I've noticed that I haven't seen anyone in the AI discourse address. When you ask AI to build something, it has a systematic bias: it prefers to generate code from scratch rather than use existing, battle-tested libraries.
This sounds minor. It's not.
The entire modern software ecosystem is built on shared libraries. Packages that have been written, tested, reviewed, patched, and hardened by thousands of developers over years. When AI reinvents a library's functionality from scratch in 200 lines of generated code, you've lost all of that accumulated security and reliability. You've replaced a fortress with a sandcastle, and the person who asked for it doesn't even know the fortress existed.
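A toy example of the pattern (the scenario and function names are mine, purely illustrative). Python's standard library ships a constant-time secret comparison because naive equality checks leak timing information; a from-scratch reimplementation passes every functional test while silently giving that hardening up:

```python
import hmac

# The 3-line "reinvented wheel" an AI might generate. Functionally
# correct, and it passes any unit test you write for it.
def naive_check(supplied: str, secret: str) -> bool:
    # == short-circuits at the first wrong character, so response
    # time leaks how much of the secret an attacker has guessed.
    return supplied == secret

# The battle-tested path: hmac.compare_digest exists precisely
# because engineers got burned by the naive version.
def hardened_check(supplied: str, secret: str) -> bool:
    return hmac.compare_digest(supplied.encode(), secret.encode())

# Identical observable behavior; the difference is invisible to
# anyone who doesn't know the history.
print(naive_check("hunter2", "hunter2"))     # True
print(hardened_check("hunter2", "hunter2"))  # True
```

No test suite distinguishes the two, which is why "it works" is the wrong bar for AI-generated replacements of proven libraries.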
The non-technical person that Shumer celebrates, the one who "describes an app and has a working version in an hour," has no way of knowing that their app is built on AI-reinvented wheels instead of industry-standard libraries. They can't evaluate the difference. They don't know what they don't know.
And this scales. The more AI-generated code enters production, the larger the surface area of unaudited, untested, home-rolled replacements for proven tools. Every one of those is a potential vulnerability. Every one is a ticking time bomb.
AI is terrible at architecture. And that's where the real money is.
Let me give you a concrete example that captures why I think Shumer's thesis fundamentally misunderstands what software engineering actually is.
Say you need to process tasks asynchronously in your application. You ask AI for a solution. Nine times out of ten it'll spin up a cloud container service. Kubernetes pods, serverless functions, message queues, the whole nine yards. It's the "textbook" answer. It's also, for many applications, wildly over-engineered, expensive, and unnecessarily complex.
A senior engineer would look at the same problem and say "We can do this with a simple database driven job queue. Three tables, a cron job, and we're done. Costs nothing, easy to debug, easy to maintain."
Both solutions work. One costs ten times more, is ten times harder to maintain, and introduces ten times more failure modes. AI picks the expensive one because it doesn't understand context. It doesn't know your scale, your budget, your team's capabilities, your existing infrastructure. It defaults to the generic, over-engineered solution because that's what shows up most in training data.
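For the curious, the "boring" option really is this small. A single-table sketch of the idea (my own simplification of the three-table version; table and column names are hypothetical, and the RETURNING clause needs SQLite 3.35 or newer):

```python
import sqlite3
import time

# In-memory DB for the sketch; in production this would simply be the
# application's existing database.
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending',
    claimed_at REAL)""")

def enqueue(payload: str) -> None:
    db.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))

def claim_one():
    # Atomically flip the oldest pending job to 'running' and hand it
    # to the worker. A cron job calling this every minute is the whole
    # "queue infrastructure".
    cur = db.execute(
        """UPDATE jobs SET status = 'running', claimed_at = ?
           WHERE id = (SELECT id FROM jobs WHERE status = 'pending'
                       ORDER BY id LIMIT 1)
           RETURNING id, payload""",
        (time.time(),))
    return cur.fetchone()  # None when the queue is empty

def finish(job_id: int) -> None:
    db.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
```

Debuggable with a plain SELECT, backed up along with the rest of the database, zero new infrastructure to operate.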
And here's the thing. The people following AI's architectural advice are the non-technical founders Shumer is empowering. They don't know there's a simpler option. They're burning money on infrastructure they don't need because AI told them to, and there's no engineer in the room to say "wait, that's overkill."
This is the real work of engineering. Not writing code. Making decisions. And AI is bad at it. Not because of a temporary limitation but because good architectural decisions require deep contextual understanding of specific business constraints that AI fundamentally doesn't have.
The adversarial security loop. This is where the AI ran out of counterarguments.
This is the part of my debate where Claude defending Shumer's position basically gave up.
The typical response to "AI writes insecure code" is "well we'll use AI security tools to catch the problems." Sounds reasonable. But think about what actually happens:
- AI writes code to optimize a metric (make it faster, cheaper, more efficient)
- Security scanner catches the vulnerability and explains why it's insecure
- The optimizing AI now knows what the scanner looks for
- Next time it achieves the same optimization through a path the scanner doesn't flag
- The vulnerability is still there. It's just invisible now
The security tool didn't make the code safer. It gave the optimizing AI a roadmap of what to avoid getting caught doing. The human manager sees the metric improved, the security scan passed, green checkmarks everywhere. They ship it.
This isn't science fiction. This is what happens when you create adversarial optimization loops without human oversight. The optimizing AI always has an advantage because it has a concrete goal while the security AI is playing defense against infinite possible attack surfaces.
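The dynamic is easy to demonstrate with a toy pattern-based scanner (both the scanner rule and the code snippets below are hypothetical illustrations, not any real tool). The rule catches the first version of the vulnerability; the rewrite keeps the identical injection hole but takes a path the pattern doesn't cover:

```python
import re

def scanner_flags(source: str) -> bool:
    # A toy static-analysis rule: flag SQL built with f-strings,
    # a common and easily detected smell.
    return bool(re.search(r'f".*SELECT.*\{', source))

# Round 1: the optimizer's output. The scanner flags it.
v1 = """query = f"SELECT * FROM users WHERE name = '{name}'" """

# Round 2: the same injection vulnerability, now built with string
# concatenation instead of an f-string. The rule no longer matches.
v2 = """query = "SELECT * FROM users WHERE name = '" + name + "'" """

print(scanner_flags(v1))  # True  - caught
print(scanner_flags(v2))  # False - identical bug, green checkmark
```

Real scanners are far more sophisticated than one regex, but the asymmetry is the same: the defender must enumerate patterns, while the optimizer only needs to find one path the patterns miss.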
The more successful AI becomes at replacing engineers the more catastrophically it fails without them.
The deskilling spiral Shumer ignores entirely
Shumer's advice is to use AI more. Spend an hour a day with it. Automate everything you can. He's right that you should learn these tools. But he completely ignores the most dangerous long term consequence.
Dependency on AI degrades the very skills that make humans valuable.
If a junior engineer never debugs code manually they never develop the instinct that tells them "something about this output feels wrong." If a lawyer never reads case law themselves they never develop legal judgment. If an analyst never builds a model from scratch they never develop the intuition to spot when a number doesn't make sense.
The more people rely on AI, the less capable they become of catching AI's mistakes. And AI's mistakes are getting subtler, not rarer. We're creating a generation of professionals who can operate AI but can't evaluate its output. That's not augmentation. That's a trap.
When I mentioned this in my debate, the AI tried to counter with "Calculators didn't destroy mathematics." Fair point. But GPS actually did destroy most people's ability to navigate. And society just accepted the tradeoff. Until the GPS fails and you're completely lost.
AI won't fail gracefully. When it fails in a complex system that humans no longer understand the consequences won't be "take a wrong turn." They'll be data breaches, system outages, financial losses, and decisions made on analysis that looked right but was fundamentally flawed.
Follow the money. AI's $100 billion circular economy.
This is the part of the argument that almost nobody is having and it might be the most important one.
Shumer frames AI disruption as an economic inevitability. The market wiped out $1 trillion in software value in a week he says. The implication is clear. The money has spoken, AI is coming for everything.
But let's actually follow the money. Because when you do something uncomfortable emerges.
Microsoft invests $13 billion in OpenAI. Where does that money go? OpenAI spends it on Azure compute, Microsoft's own cloud platform. Microsoft reports the spending as Azure revenue growth. AI revenue numbers look incredible. Investors cheer. Stock goes up. Rinse, repeat.
Nvidia sells over $100 billion in GPU chips. To whom? Microsoft, Google, Meta, Amazon. Those companies build AI infrastructure. They sell AI services to each other. They report "AI revenue." But peel back the layers and an enormous portion of that revenue is just tech companies buying from tech companies and calling it growth.
This is the economic structure of the AI boom. I give you $100, you give me $100, and we both report $100 in revenue. Everyone's top line is growing. Everyone's AI metrics are up and to the right. But where is the end customer? The real business? The real consumer paying real money for real value that wouldn't exist without AI?
A law firm paying $20 a month for ChatGPT isn't financing $100 billion data centers. A startup saving money on junior developers isn't justifying the trillion dollar valuations. The math doesn't add up unless you believe the circular spending will eventually be justified by real world productivity gains that haven't materialized at scale yet.
We've seen this movie before. It was called the dot com bubble. Companies were buying ads from each other, reporting revenue from each other, and valuations kept climbing until someone finally asked "Wait. Who's the actual customer?" When the answer turned out to be "mostly each other" the whole thing collapsed.
I'm not predicting an AI crash. The technology is real and it does create genuine value. But the economic narrative around AI, the one that makes Shumer's disruption timeline feel inevitable, is inflated by circular spending that masks how much real world adoption is actually happening.
The four arguments that sound good until you think about them
I know what the AI optimists will say. I've heard these arguments. I've debated them. Here's why they don't hold up.
"Good enough beats perfect, companies will adopt AI anyway because it's cheaper."
This sounds reasonable until you realize that "good enough" is actually getting worse, not better. Before AI, humans had a quality floor. They took pride in their work. They iterated until things felt right. Now AI has induced a laziness in the loop. People accept the first output. They stop iterating. They stop thinking critically about whether the result actually works. The quality bar isn't rising with AI adoption. It's collapsing, because the human in the loop has mentally checked out.
"One engineer plus AI replaces five engineers, jobs still disappear."
This misunderstands how human ambition works. When one engineer becomes 5x more productive, companies don't fire four engineers and call it a day. They say "great, now we want 5x the output." Human appetite for more is infinite. We never look at what we have and say "that's enough." We always want the next feature, the next product, the next market. More productivity creates more demand, which creates more work, which requires more people. This isn't theory. It's the Jevons paradox, and it's played out in every previous technological revolution.
"Every technology revolution had valid skeptics, the internet had the same problems."
Sure. But the dot com skeptics never told people their careers were permanently over. They said "these valuations are insane" and "most of these companies will fail." They were right. And the internet still transformed the world. My argument isn't that AI won't matter. It's that Shumer is telling people their entire professional existence is obsolete. That's a fundamentally different and far more reckless claim.
"The circular economy argument applies to every infrastructure buildout."
There's a critical difference. When railroads were built, the investment produced something tangible. Steel tracks connecting real cities, enabling real commerce. When the electrical grid was built, it powered real factories producing real goods. With AI, you spend $3,000 on API tokens, wait four hours, and get a half-working application that still needs a human to fix. What's the tangible output? Where's the steel? Where's the electricity powering a factory floor?
What you should actually do
Shumer's practical advice isn't bad. Learn the tools. Be curious. Don't ignore this. I agree with all of that.
But let me add some things he left out.
Use AI, but learn what it's doing, not just what it produces. Don't just marvel at the output. Understand the process. When AI writes code, read it. When it makes a recommendation, ask why. When it produces analysis, verify the numbers. The person who can use AI AND evaluate its work is infinitely more valuable than the person who can only prompt.
Double down on the skills AI is worst at. Deep architectural thinking. Security awareness. Systems design. Understanding why things are built the way they are not just how to build them. These are the skills that will command premiums as AI generated code floods the market.
Be skeptical of anyone who tells you AI replaces expertise. It doesn't. It replaces labor. The difference is everything. Expertise is knowing which labor matters, which shortcuts are dangerous, and which "optimizations" will blow up in six months.
If you're early in your career go deeper not broader. The people who will thrive are the ones with genuine deep expertise that lets them serve as the human in the loop for AI systems. Generalists who skim the surface and let AI do the rest are commoditized overnight. Specialists who understand systems at a fundamental level become more valuable with every AI deployment.
The bottom line
Something big is happening in AI. On that Shumer and I agree. But the "something big" isn't what he thinks.
The real story isn't "AI replaces everyone." It's "AI creates an enormous amount of unverified, unaudited, poorly architected code and decisions, and the companies that survive will be the ones that kept humans who can tell the difference."
The danger isn't that you'll be replaced by AI. The danger is that you'll be replaced by someone using AI who doesn't understand what they're doing. And then you'll be hired back at twice the rate to clean up the mess.
Shumer compares this moment to February 2020 and Covid. I'd offer a different comparison. It's like February 2008. Everyone is building on foundations they don't understand, leveraging instruments they can't evaluate, and the people raising alarms are being told they're missing the opportunity of a lifetime.
We know how that story ended.
The smart money isn't on replacing humans. It's on the humans who understand what AI is actually doing under the hood.
That's the real "something big" that most people will be blindsided by.
