AI sets a new anchor
You fire up Claude Code, describe a feature in plain English, and twenty minutes later it’s working in production. You lean back and think: “That would have taken me two weeks in 2019.”
You pat yourself on the back. A 40x productivity gain. You consider taking the rest of the day off.
That feeling will bite you.
The anchor you don’t realize you’ve set
There’s a cognitive bias called anchoring, the tendency to lean on the first piece of information you encounter when making a decision. For AI-assisted development, the anchor is your memory of how things used to be.
Every time you finish a task and reflexively compare it to the pre-AI version of yourself, you set an invisible ceiling on your ambition.
You built one feature in twenty minutes. Your brain does the math against the old baseline, floods you with accomplishment, and you stop there. You move on. Or worse, you take a break because you feel like you’ve earned it.
But here’s the question you’re not asking: could you have built three features in that same window? Could you have had the AI write the tests and the docs, then deploy for you? Could you set up agents that surface potential bugs and feature ideas and bring them to your attention?
You’ll never know, because you anchored on the wrong comparison.
The comparison that actually matters
While you’re basking in your 40x improvement over 2019-you, someone else is chaining agents together. They’re running systems that find bugs, write fixes, validate against a test suite, and deploy. All without a human touching the keyboard.
They’re not comparing themselves to who they were five years ago. They’re comparing themselves to what the tools make possible right now.
Two developers can both say they use Claude Code or Cursor. One uses it like a faster autocomplete. The other has restructured their entire workflow. They think in systems, not lines of code; prompt for architectures, not functions; let the AI handle the tedious 80% while they focus on judgment calls.
Both feel equally productive compared to their past selves. Compared to each other? Not even close.
The treadmill is accelerating
AI tools are not static. What feels blazing fast today will feel sluggish in twelve months. The twenty-minute feature becomes the two-minute feature, then the “why is a human even involved?” feature.
If your mental model is still anchored to 2019, you won’t notice you’ve fallen behind. You’ll still feel productive. But “way faster than I used to be” and “competitive in today’s landscape” are increasingly different things.
We’ve seen this movie before
The “technology will set us free” prediction has been wrong for almost a century. Every time, the logic sounds airtight: if we’re dramatically more productive, we’ll get the same output in less time.
- 1930: Economist John Maynard Keynes predicted we’d be working 15-hour weeks by 2030.
- 1956: Vice President Richard Nixon told a crowd that a four-day work week was coming in “the not too distant future,” thanks to machines eliminating “back-breaking toil.”
- 1962: The Jetsons depicted George Jetson complaining about his grueling two-to-three-hour workdays.
- 1965: TIME Magazine’s cover story on computers featured IBM economist Joseph Froomkin predicting automation would bring a 20-hour work week, creating “a mass leisure class.”
That is emphatically not what happened. In any industry. Ever. And it won’t this time either.
Computers didn’t give us the same output in fewer hours. They gave us dramatically more output in the same hours. Companies didn’t send people home early on Thursdays. They found new things to build, new markets to enter, new problems to solve. Expectations ratcheted up to match the new capabilities.
Email. Mobile phones. Cloud computing. Every major shift was supposed to make things “easier.” Every one instead redefined what was expected.
I’m not arguing whether this is morally right or wrong. I am saying this is how it works, and pretending otherwise puts you at a disadvantage.
The team expectations trap
This anchoring problem doesn’t just affect individuals. It distorts how entire teams think about productivity and compensation.
A team of five developers with AI tools now produces 10x the output it generated three years ago. Same salaries, same hours. It’s tempting to ask: “Why should we produce way more while earning the same?”
Understandable. But it misunderstands the situation.
The team isn’t more valuable because they’re working harder. They’re more valuable because their tools are more capable. And every other team has access to the same tools. Producing 10x more isn’t extraordinary, it’s the new minimum for a team that’s effectively leveraging what’s available.
History is unambiguous: when new technology makes workers more productive, the expectation adjusts upward. The market doesn’t reward you for matching the new baseline. It rewards you for exceeding it.
The manager’s version
There’s a mirror-image version of this trap, and it might be more dangerous.
When a leader who remembers the old days sees their team ship in a day what used to take a sprint, their reaction is amazement. They’re anchoring on their own experience. Years in the trenches, grinding through the same work manually.
The result? They don’t push for more. They don’t ask if the team could’ve shipped three features, or if the implementation could be more robust, or the testing more comprehensive. The team gets praised for what is, in the context of modern tooling, merely adequate work.
A leader’s job is to calibrate expectations to what’s possible, not what’s impressive relative to the past. When the leader’s anchor is wrong, the whole team’s ceiling drops.
The “good enough” plateau
Anchoring doesn’t just limit your speed. It limits your quality.
Every engineer knows the old tradeoff. You had two weeks to build a feature. You spent the first week and a half getting it to 95%. Functional, meets the requirements, ships on time. That last 5%? Comprehensive tests, polished error handling, tight edge cases, proper docs. That alone would have taken another week. The marginal cost of going from good enough to great was brutal. So you shipped good enough, because that was the rational call.
AI has demolished that tradeoff.
The work that used to make “95% to 100%” economically irrational now takes twenty minutes. You can have the AI write the tests, handle the edge cases, generate the docs, and refine the error handling. All in less time than it used to take you to write a single unit test.
For the first time, shipping at 100% is actually realistic. And if you’re still shipping at 95% because that’s what you’re used to, you’re leaving the best part of this technology on the table.
Experience is a lever, not a cushion
Experience matters. But not for the reasons most people think.
The value of experience in an AI-augmented world isn’t war stories about how hard things used to be. It’s judgment. You know what “good” looks like. You spot subtle architectural mistakes. You understand second and third-order consequences. You know which corners can be cut and which will haunt you.
That judgment is valuable, but only if you point it forward.
The developer who uses twenty years of experience to aim AI tools at the right problems, at the right abstraction, with the right quality bar? Nearly unstoppable.
The developer who uses those same twenty years as a reference point for how impressed they should be? Plateau.
Experience is a lever. You can use it to lift more, or you can sit on it. The market will eventually make the consequences of that choice clear.
Stop comparing down
Change the question you ask after finishing a task. Instead of “how long would this have taken me before?” try: “what else could I have accomplished in this same window?”
When you finish a feature in twenty minutes, don’t stop. See if you can get the same outcome in five minutes next time. Ask the AI to suggest three ways the feature could be extended. See how far you can push it to augment you.
Then zoom out. Can you describe the outcome and let the AI figure out the implementation? Can you chain tasks together? Can you set up workflows where AI handles the routine while you focus exclusively on decisions that require human judgment?
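To make the workflow idea concrete, here is a minimal sketch of one such chain: a loop that reruns a test suite, hands each failure back to an AI tool, and only escalates to a human when it gives up. The test runner and fix step below are stubs of my own invention, not any real tool’s API; the shape of the loop is the point, not the implementation.

```python
# Sketch of a "fix until green" loop: the AI handles the routine
# retry cycle, and a human steps in only when the loop gives up.
# The test runner and the fix step are passed in as callables;
# both stubs below are hypothetical stand-ins.

def fix_until_green(run_tests, request_fix, max_attempts=3):
    """Rerun the tests, requesting an AI fix after each failure.

    Returns the attempt number that passed, or None if we gave up
    and a human needs to look at it.
    """
    for attempt in range(1, max_attempts + 1):
        passed, log = run_tests()
        if passed:
            return attempt
        request_fix(log)  # e.g. pipe the failure log to your AI tool
    return None

# Demo with stubs: the fake suite fails once, then passes after one "fix".
state = {"runs": 0, "fixes": 0}

def fake_run_tests():
    state["runs"] += 1
    passed = state["fixes"] >= 1  # suite goes green once a fix has landed
    return passed, "" if passed else "FAILED test_checkout"

def fake_request_fix(log):
    state["fixes"] += 1  # pretend the AI patched the code

result = fix_until_green(fake_run_tests, fake_request_fix)
print(result)  # → 2: failed on attempt 1, passed on attempt 2
```

In a real setup the two callables would wrap your actual test command and whatever agent or CLI you use; the structure, not the stubs, is what turns a one-off prompt into a workflow.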
The developers who will thrive aren’t the ones fastest at the current paradigm. They’re the ones who keep pushing past each “good enough” plateau and refuse to let yesterday’s baseline define today’s ambition.
The twenty-minute feature is already becoming unremarkable. Imagine what you can build when you stop being impressed by it.