People often say that how someone talks to others reveals more about themselves than about the person they’re addressing. Something similar happens when people describe their experiences with LLMs like Grok, Claude, or ChatGPT.

The most common complaint goes something like this: “I asked how many R’s are in ‘strawberry’ and the LLM got it wrong.” Or: “I asked a question from my actual work and the AI couldn’t answer it, so clearly these systems can’t surpass human capabilities or take our jobs. We’re safe.”

If you think this way, reality has an ugly surprise in store for you.

Every inference provider – usually the AI lab itself – runs some sort of router between you and its models. When you send a prompt, the router analyzes the question to determine how complex it is and how much “brain power” it requires. Simpler questions route to simpler models; more advanced questions route to more advanced models.

Your subscription tier matters too. Pay less and your questions might hit a distilled model. Pay more and you get something like GPT-5 Pro. Free-tier users get the leftovers. The reasons are economic, obviously.
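To make that concrete, here’s a minimal sketch of what such a router might look like. Everything in it – the complexity heuristic, the tier names, the model names – is an illustrative assumption of mine; real providers use trained classifiers and keep their routing logic private.

```python
# Hypothetical router sketch. The heuristic, tiers, and model names are
# invented for illustration; no provider's real routing logic is public.

def estimate_complexity(prompt: str) -> float:
    """Toy heuristic: longer prompts with technical markers score higher."""
    markers = ("prove", "derive", "debug", "optimize", "architecture")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.2 * sum(m in prompt.lower() for m in markers)
    return min(score, 1.0)

def route(prompt: str, tier: str) -> str:
    """Pick a backing model from prompt complexity and subscription tier."""
    complexity = estimate_complexity(prompt)
    if tier == "free":
        return "small-distilled-model"      # leftovers, whatever the question
    if tier == "pro" and complexity > 0.7:
        return "frontier-reasoning-model"   # expensive, reserved for hard prompts
    if complexity > 0.4:
        return "mid-size-model"
    return "small-distilled-model"          # cheap default

print(route("How many R's are in 'strawberry'?", tier="pro"))
# -> small-distilled-model: a trivial-looking question never reaches the big model
```

The point isn’t the heuristic; it’s that the model answering you is chosen per prompt, so one bad answer tells you little about what the lab’s best model can do.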

This makes evaluating AI capabilities harder than just asking questions about your job. When colleagues discuss work, a transcript of their conversation might read like prompts you could feed into an LLM. But the crucial difference is the uncommunicated context.

Consider this: explaining an issue over a video call to a colleague on another continent requires more depth than explaining the same issue to someone you work with daily. The local colleague already shares your context.

The latest models include memory features that “remember” previous conversations. So when asked about cyber intelligence, a model gives me very different answers than it would give you, because it knows I run a company specializing in cyber intelligence. The model assumes my questions relate to Cyber Intelligence House’s scope.

If I want to discuss the topic academically, I have to either disclose that I’m approaching this as a researcher, or start a new “private” conversation where the model ignores previous interactions.
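Mechanically, you can think of memory as context that gets silently prepended to your prompt. The sketch below is an assumption about how that assembly might work, not any vendor’s actual implementation:

```python
# Assumed illustration of memory as prepended context; real memory
# systems are more sophisticated and not publicly documented.

memory = {
    "me": "Runs a company specializing in cyber intelligence.",
    "you": None,  # no stored profile
}

def build_prompt(user: str, question: str, private: bool = False) -> str:
    """Prepend remembered facts unless the user opens a private conversation."""
    profile = None if private else memory.get(user)
    prefix = f"Known about this user: {profile}\n" if profile else ""
    return f"{prefix}Question: {question}"

q = "What are the current trends in cyber intelligence?"
print(build_prompt("me", q))                # framed around my company's scope
print(build_prompt("you", q))               # generic, textbook-style framing
print(build_prompt("me", q, private=True))  # memory ignored, fresh context
```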

Understanding how to provide the right context when interacting with LLMs – or any AI model, including photo-editing tools – is what determines success. Users who figure out the right amount of context get consistently better results.
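What the right amount of context means in practice is easiest to see side by side. Both prompts below are invented examples; the second carries the background a distant colleague – or an LLM – would need:

```python
# Invented example: the same question with and without the context
# that someone outside your team would need to answer it.

bare = "Why is the deployment failing?"

contextual = """You are helping debug a CI/CD pipeline.
Stack: Dockerized Python 3.12 service, deployed to Kubernetes via Helm.
Symptom: pods crash-loop after yesterday's release; logs show an ImportError.
Recent change: we bumped a dependency pin in requirements.txt.
Question: why is the deployment failing, and what should we check first?"""
```

An LLM can only guess at the first; it can actually work on the second.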

So it’s not always that the AI is dumb or the question is dumb. As I often say in the classroom: there are no stupid questions, only stupid people asking them.

That’s the key to improving your own abilities: often the solution is looking in the mirror.