Pascanilai


What AI Can't Give You

By · 2 min read · #ideas #ai

Been thinking about where the edge is now that everyone has the same AI. If we're all prompting the same models, the moat can't be "I can write code" or "I can summarize a paper." It has to be something the model doesn't have.

Two things I keep coming back to.

The first is information the model never saw. This breaks into a few kinds. There's the specific stuff about whatever you're working on: your app, your users, your metrics. You have to feed that in before you can get anything useful out. There's anything past the knowledge cutoff, which matters more than people think. The AI will still tell you to grind LeetCode to get a software job, because in its training data that was true. It doesn't know that vibe coding with Claude Code has already changed what a junior engineer's day looks like. And then there's insider knowledge, the kind that lives in private Discords, in real conversations with someone who actually does the thing, in your own taste for how an interface should feel. None of that is in the weights.

If your work depends on information everyone already has, AI is going to do it faster than you. If it depends on information you collected yourself, AI just makes you stronger.

The second moat is books, or really anything you drill deep enough that you can act on it in real time.

The problem with AI isn't knowledge, it's latency. If I'm in a negotiation and the other person makes a move, I have maybe two seconds to respond. I can't pull out my phone and ask Claude what to say. Same for a first date, a sales call, a founder getting one question from an investor. The decisions that matter most in life happen on a timeline shorter than a prompt.

That's where a book earns its keep. If I've read enough on game theory, or negotiation, or whatever the domain is, the response is already cached in me. System 1, not System 2. AI can prep me before and debrief me after, but in the moment I'm on my own, and what's sitting in my head is the only thing that gets to play.

So the moat isn't reasoning. The model reasons fine. The moat is the stuff you collected before the conversation started and the stuff you practiced until you didn't have to think about it. Private data in, rehearsed judgment out. Everything in between, the part everyone's trying to automate, is going to commoditize fast.