In my last video, I talked about why the AI era doesn’t just require new skills — it requires unlearning old ones. And a lot of you pushed back with the same question.
We know how to learn. We’ve been doing it our whole careers. But how do you actually unlearn something? And why does it matter enough to talk about separately?
That’s a really good question. And it deserves a real answer.
So today, that’s exactly what we’re going to get into. Why dumping old mental models is harder than learning new ones — and more important. How to actually do it. And the ten specific mental models you need to dump before AI makes the decision for you.
Let’s start with a story.
PART 1: WHY UNLEARN?
There’s a well-known story in Zen tradition about a scholar who visits a master to learn about Zen.
The scholar arrives full of opinions and knowledge. He begins talking — sharing his theories, his ideas, his existing understanding. The master listens quietly, then offers tea.
He begins pouring the tea into the scholar’s cup. The cup fills up. And then it keeps filling — tea spilling over the sides, onto the table, onto the floor.
The scholar watches, confused, then alarmed. “Stop! The cup is full. It can’t hold any more.”
The master sets down the teapot and says: “You are like this cup. Full of your own opinions and knowledge. How can I show you Zen unless you first empty your cup?”
I want you to hold that image for a moment.
Because here’s the thing most career advice gets wrong about the AI era: it treats the challenge as a filling problem. What new skills do you need to add? What tools do you need to learn? What certifications should you stack on top of what you already know?
But if your cup is already full — if you’re operating on mental models built for a different era — none of that new knowledge has anywhere to go. You’ll learn the tool and use it exactly the way you’ve always worked. You’ll prompt AI to produce the same outputs you’ve always produced. You’ll optimize the same workflows you’ve always run.
You’ll be the scholar pouring more tea into an already full cup. The real work of navigating the AI era isn’t filling the cup. It’s emptying it first. That’s what unlearning is. And it’s why it has to come before everything else.
Why This Feels Threatening
Here’s what makes unlearning genuinely difficult — and why most people avoid it.
Learning is additive. It builds on your existing identity. When you learn something new, you become a slightly more capable version of who you already are.
Unlearning is subtractive. It asks you to question the foundation. To look at something you’ve built your professional identity around and say — this no longer serves me. That’s not just intellectually uncomfortable. It can feel like a kind of loss.
Think about the carriage driver who needed to learn to drive a car. He didn't just have a skills gap. He had an identity gap. His entire sense of professional competence — everything that made him excellent, everything that earned him respect — was built around horses. Asking him to let go of that wasn't just a training challenge. It was a self-concept challenge.
That’s the real reason people resist unlearning. It’s not laziness. It’s that our mental models aren’t just how we think — they’re how we see ourselves.
PART 2: HOW TO UNLEARN
The Science of Why This Is Hard
Let me bring in some science here, because I think understanding why unlearning is difficult actually makes it easier to do.
Daniel Kahneman, in his book Thinking, Fast and Slow, describes two systems of thought. System 1 is fast, intuitive, automatic — it's your brain running on pattern recognition. System 2 is slow, deliberate, effortful — it's your brain consciously working through a problem.
Here’s the key insight: the more you do something, the more it moves from System 2 to System 1. It becomes automatic. It stops requiring conscious effort. Your brain builds what we commonly call muscle memory — not just physically, but cognitively.
Think about the last time you drove a familiar route. You arrived at your destination and realized you couldn’t quite remember the drive. Your brain had handled it on autopilot. System 1 was running the show.
Now think about your professional instincts. The way you approach a problem. The way you assess risk. The way you decide what a good piece of work looks like. The way you determine your own value.
Most of those instincts have been running on autopilot for years. They’ve moved so deeply into System 1 that you don’t even notice them anymore. They just feel like how things are.
That’s what makes them so hard to change. You can’t unlearn something you’re not aware of running.
The Practice of Unlearning
So how do you actually do it? Three things.
First: Name it.
You cannot let go of something you haven’t identified. The first practice of unlearning is developing the habit of noticing your own assumptions — particularly in moments when something new feels wrong or threatening or just slightly off.
That friction is a signal. It usually means a new reality is bumping up against an old mental model.
When you feel that friction — don’t dismiss it. Get curious about it. Ask: what am I assuming here that might no longer be true?
Second: Question the origin.
Most of our professional mental models were formed in response to real conditions — they worked, at some point, in some context. Expertise really was about information possession before search engines existed. Outputs really were the primary measure of value before AI could generate them.
Understanding where a mental model came from helps you see it as a response to a context — not a universal truth. And if the context has changed, the model can change too.
Ask yourself: when did I form this belief? What was true then that may not be true now?
Third: Replace, don’t just remove.
Nature abhors a vacuum, and so does professional identity. If you simply try to eliminate a mental model without replacing it, you’ll default back to the old one under pressure.
For every mental model you’re letting go of, you need a replacement that serves the new context. We’re going to cover those replacements as we go through each one today.
PART 3: THE 10 MENTAL MODELS TO DUMP
Alright. Let's get into the ten mental models you need to dump before AI takes over the space they currently occupy in your career. I'm going to move through these with intention. Some get more time than others, because some are more urgent.
#1: Expertise means knowing more than others.
This is the deepest one. Most professional identity — especially at the senior level — is built on domain knowledge. You knew the industry, the data, the history, the nuances. People came to you because you had access to information they didn’t.
AI has largely ended that advantage. The person next to you can now query the same domain knowledge in seconds. Information possession is no longer scarce.
What replaces it: Expertise as judgment application. Knowing not just what the answer is — but which question is worth asking, what to do with the answer in this specific context, and when the conventional answer is wrong. That’s not something AI can replicate.
#2: Value means producing outputs.
The report. The analysis. The presentation. The code. Most professional identity is tied to the quality and volume of what you produce.
AI produces deliverables. Increasingly, it produces them well. If your identity is tied to output quality, you are in a direct and unwinnable competition with a tool that improves every six months.
What replaces it: Value as judgment on outputs. What you do with the deliverable — the synthesis, the decision, the stakeholder navigation, the contextual judgment that makes the output mean something — is where human value now lives.
#3: Seniority means having the answers.
Senior professionals are often rewarded for decisiveness. For projecting confidence. For having a clear point of view quickly.
In an environment where AI can generate ten well-reasoned perspectives on any question in thirty seconds, answer speed is no longer a differentiator. What matters now is the quality of judgment used to evaluate those answers — in context, with organizational nuance, with an understanding of what’s not in the data.
What replaces it: Seniority as quality of questions asked and judgment applied. The shift is from answer provider to judgment owner.
#4: Efficiency is the goal.
For decades, the professional imperative was to do more, faster. Optimize the workflow. Cut the meeting time. Reduce the cycle.
But efficiency asks “how do I do this better?” — and misses the more important question: “should I be doing this at all? And should it be me doing it?”
What replaces it: Effectiveness as the goal. Not how fast you do things — but whether the things you’re doing are the highest-leverage use of human judgment in your organization. AI handles efficiency. You own effectiveness.
#5: More input equals better output.
The instinct to gather more data, consult more stakeholders, run more analysis before deciding is deeply ingrained in professional culture. It signals diligence. It reduces the risk of being wrong.
But AI can generate infinite inputs. If you’re the kind of leader who always needs more information before deciding, you now have an infinite supply of reasons to delay. Judgment about when you have enough — and the confidence to decide with incomplete information — has become the scarce resource.
What replaces it: Calibrated decisiveness. Knowing how much input a decision actually requires, and moving when you have enough — not when you have everything.
#6: Specialization is protection.
Deep expertise in a narrow domain felt like career insurance for decades. The more specialized you were, the harder you were to replace.
AI is now the most specialized tool in every domain simultaneously. It can go deeper into your specialty than most humans can, faster than any human can.
What replaces it: Cross-domain synthesis as protection. The ability to connect insights across domains — to see what the customer success data means for product strategy, what the engineering constraints mean for go-to-market — is something AI does poorly. T-shaped professionals with genuine breadth are now more defensible than deep specialists.
#7: Visibility comes from doing.
Careers were built on output volume. The person who produced the most, shipped the most, contributed the most visibly — they got noticed, promoted, recognized.
In an AI era, everyone’s output volume increases. The production advantage disappears. And what remains visible — what actually differentiates — is your thinking. Your perspective. Your judgment on hard problems.
What replaces it: Visibility through insight. Being known for how you think, not how much you produce. This is a significant identity shift for high-performers whose self-concept is built around execution.
#8: Scale requires headcount.
The mental model that bigger impact requires bigger teams is one of the most deeply held beliefs in organizational life. More people, more capacity, more output.
A small team with the right AI leverage can now outperform organizations ten times their size. I’ve seen this firsthand. The constraint is no longer headcount — it’s the quality of judgment directing the leverage.
What replaces it: Scale through leverage architecture. The leader who knows how to design human-AI workflows — where to apply AI, where to keep humans, how to structure the handoffs — creates more impact than the leader who builds the biggest team.
#9: Risk management means saying no.
The organizational instinct to manage risk through process, approval layers, and deliberation served well when the cost of moving wrong was higher than the cost of moving slowly.
That calculus is inverting. In a fast-moving AI environment, the risk of being six months behind a competitor who moved faster is now often greater than the risk of an imperfect decision.
What replaces it: Risk management as speed calibration. Knowing which decisions require process and which require speed — and having the judgment to tell them apart — is the new risk management competency.
#10: Learning is additive.
The assumption that career development means stacking new skills on top of existing ones — that growth is always accumulation — is perhaps the most subtle mental model on this list.
Because it means most professionals approach the AI era as a learning challenge: what do I need to add? The better question is: what do I need to subtract first?
What replaces it: Learning as curation. Actively deciding which mental models to retire, which capabilities to let AI handle, and which uniquely human strengths to invest in. Growth in the AI era is as much about letting go as it is about acquiring.
CLOSE
Ten mental models. Ten things that served you well — and may now be working against you.
I want to be clear about something before I close.
Dumping these mental models isn’t about dismissing everything you’ve built. Your domain knowledge, your judgment, your experience — those are real assets. The goal isn’t to start over. The goal is to free those assets from the frameworks that are constraining them.
The cup doesn’t need to be thrown away. It just needs to be emptied — so it can be filled with something that actually serves where you’re going.
That’s the work. And it starts with awareness. Go back through those ten mental models and honestly ask yourself — which ones am I still running on? Which ones do I need to dump before AI takes over the ground they’re standing on?
Your answer to that question is where the real shift begins.
If this resonated, I’d love to hear which mental model hit closest to home for you. Drop it in the comments — I read every one.
And if you want to take this further — I’m running a free live session on Maven called How To Make The Career Move AI Can’t Automate. We go deep on the frameworks for what comes after the unlearning — how to reposition, how to rebuild, and how to build a career that compounds as AI advances.
Here is the link: https://maven.com/p/5b8e86/how-to-make-the-career-move-ai-can-t-automate
See you in the next one.