How you ask AI matters — 5 common mistakes and how to fix them
AI analyzes your recent conversations to identify framing patterns that might be limiting the quality of responses you get.
The quality of what AI gives you depends directly on how you ask. This isn't unique to AI — it's the same principle behind a good patient history. A doctor who hears "I've been feeling really tired lately" is working with almost nothing. A doctor who hears "For the past three weeks, I've been waking up exhausted after eight hours of sleep, and it started when I changed my work schedule" has something to investigate.
Research on clinical communication published in The BMJ consistently shows that the specificity and structure of how patients describe symptoms directly affects diagnostic accuracy. The same principle applies to AI.
1. The vague complaint
"I've been feeling really tired lately" gives AI nothing to anchor on.
"For the past three weeks, I've been waking up exhausted even after eight hours of sleep. My concentration at work has dropped and I don't feel like exercising like I normally do. This started right after I changed my sleep schedule for a project deadline."
Same symptom. But now AI has timing (three weeks), severity (affecting work and exercise), and context (schedule change). Those are hooks it can use to narrow possibilities and ask useful follow-up questions.
2. Confirmation-seeking questions
"Could this be a thyroid problem? I read that exhaustion is a sign of thyroid issues."
When you lead with a hypothesis, AI tends to engage with that hypothesis — sometimes agreeing too readily, sometimes focusing on it to the exclusion of other possibilities. Research on anchoring bias in BMJ Quality & Safety documents how initial framing affects diagnostic reasoning, and AI systems are susceptible to the same pattern.
Better: "I've been exhausted for three weeks. What are the most common causes I should consider, and what patterns would distinguish between them?" This asks for breadth first, investigation second.
3. Catastrophizing the frame
"I'm always tired. I probably have some serious disease."
Catastrophic framing pushes AI toward either reassurance or alarm, neither of which is useful for investigation.
"I've been tired for three weeks, which is unusual for me. I want to understand what the most common causes are and what I should monitor to know if this needs a doctor's attention."
Same concern, framed as a practical investigation rather than an emotional emergency.
4. Missing context
"Why do I feel sick sometimes?"
AI can't reverse-engineer a pattern from nothing. Context includes timing, duration, frequency, what makes it better or worse, what was different before it began, and associated symptoms.
"I get nauseous and dizzy in the afternoon, usually around 2-3 PM. It lasts about 30 minutes. Looking at my logs, it seems to happen on days I skip lunch or eat a high-carb breakfast. On regular-meal days, I'm fine."
Now AI has concrete material to investigate.
5. Asking for diagnosis
"I have joint pain and fatigue. What do I have?"
AI can't and shouldn't diagnose you. It can help you understand what factors distinguish one possibility from another, what questions to ask your provider, and how to organize your information for clinical evaluation.
"I've had joint pain in my knees and hips for two months, along with more fatigue than usual. I want to understand what patterns or tests would help narrow this down, and what questions I should ask my doctor."
That framing gets you a structured investigation plan rather than a speculative diagnosis.
The common thread
All five pitfalls give AI too little structure to produce useful output. Specificity, openness, and clear intent transform AI from a generic information source into an investigation partner.
References
- The medical interview: the three function approach — Annals of Internal Medicine, 1990. Communication structure and diagnostic accuracy.
- Anchoring bias in clinical judgment — BMJ Quality & Safety, 2015. Initial framing affecting diagnostic reasoning.
- Patient-centered communication and diagnostic accuracy — JAMA, 2006. Specificity in symptom reporting improving outcomes.