How AI Thinks

What AI pays attention to — and why what's loudest isn't always what matters


You're talking to Iris about your fatigue. You mention a headache in passing — "also had a headache yesterday, but that's not the main thing." For the rest of the conversation, Iris keeps circling back to headaches. Your fatigue question gets shallower answers. The headache — which you flagged as secondary — is now driving the analysis.

This isn't a bug. It's how AI attention works, and understanding it makes you dramatically more effective at directing your investigation.

AI focuses on what's in front of it

Large language models process text by assigning attention weights — how much importance each piece of information gets when generating a response. Research on transformer attention mechanisms has shown that models heavily weight recent and prominent information. Whatever you said most recently, most vividly, or with the most detail gets the most influence over the response.

This creates a practical problem: the thing you described most thoroughly isn't necessarily the thing that matters most. If you spent three sentences on your headache and one on your fatigue, the model treats the headache as the primary concern — not because it reasoned about clinical importance, but because it occupied more of the input.
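A toy sketch (Python with NumPy; a simplified illustration, not Iris's actual model) shows why sheer volume translates into weight: softmax attention normalizes relevance scores across the input, so three equally relevant headache sentences collectively outweigh one fatigue sentence three to one, even though no single sentence is more important.

```python
import numpy as np

def attention_weights(scores: np.ndarray) -> np.ndarray:
    """Softmax: convert raw relevance scores into weights that sum to 1."""
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical input: three sentences about a headache, one about fatigue,
# each judged equally relevant on its own.
scores = np.array([1.0, 1.0, 1.0, 1.0])  # headache x3, fatigue x1
w = attention_weights(scores)

headache_share = w[:3].sum()  # 0.75 -- headache dominates by volume alone
fatigue_share = w[3]          # 0.25
```

With identical per-sentence scores, the topic that occupies more of the input automatically claims more of the total weight.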

The same principle applies to what Iris loads from your notes. If your migraine notes are detailed and your sleep notes are sparse, the model's analysis will tilt toward migraines even if sleep is the actual driver. The information that's most available gets the most weight, regardless of whether it's most important.

How AI gathers extra context

When you ask Iris a question, the model doesn't just work with what you typed. Before responding, it loads information from several sources: your system notes (Identity, Current Focus), any topic notes it decides are relevant, recent entries if the question involves your data, and the full conversation history.

This loading process is selective. Iris browses your note directories, reads summaries, and decides what to pull in. It's making judgment calls about relevance — and those calls aren't always right. It might load your gut investigation notes when you're actually asking about energy levels, because you mentioned food in passing.
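As a hypothetical illustration (the note names, summaries, and scoring function here are invented, and Iris's real relevance logic is more sophisticated), even a simple word-overlap score shows how a passing mention can pull in the wrong notes:

```python
def relevance(question: str, note_summary: str) -> float:
    """Toy relevance score: fraction of question words found in the note."""
    q_words = set(question.lower().split())
    n_words = set(note_summary.lower().split())
    return len(q_words & n_words) / len(q_words)

# Invented note directory with one-line summaries.
notes = {
    "gut-investigation": "food triggers bloating dairy gluten elimination diet",
    "sleep": "duration quality waking unrefreshed schedule",
}

question = "why is my energy so low after food lately"
scores = {name: relevance(question, summary) for name, summary in notes.items()}

# "food" appears in the gut notes, so they win -- even though the
# question is really about energy, which neither summary mentions.
best_match = max(scores, key=scores.get)  # "gut-investigation"
```

One incidental word in the question is enough to tip the retrieval toward an unrelated investigation, which is why naming the notes you want loaded beats letting relevance scoring guess.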

Understanding this helps you take control. When you tell Iris "load my sleep notes and my fatigue entries from the last month," you're overriding the model's guesswork with your knowledge of what actually matters. You're directing the attention rather than hoping it lands in the right place.

Recency bias: the last thing you said matters most

AI models give disproportionate weight to the most recent messages in a conversation. This is a documented phenomenon in language models: research on positional bias, including the "Lost in the Middle" study from Stanford NLP, found that models reliably use information at the beginning and end of the context, while information buried in the middle often gets overlooked. In a conversation, your latest message sits at the end of the context, exactly where attention is strongest.

For health conversations, this means: the question you ask last shapes the response more than the detailed history you provided earlier. If you spend ten messages carefully describing your sleep patterns, stress levels, and activity data — then end with "also, do you think I should try magnesium?" — the response will center on magnesium. The model is answering the last question, not synthesizing everything you said.
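A deliberately simplified model makes the imbalance concrete (real attention is learned, not a fixed decay, and the 0.5 decay factor here is invented for illustration): if each earlier message carries half the effective weight of the message after it, the final message alone outweighs ten messages of careful history combined.

```python
def recency_weights(n_messages: int, decay: float = 0.5) -> list[float]:
    """Toy recency model: each earlier message gets `decay` times the
    weight of the message after it; weights are normalized to sum to 1."""
    raw = [decay ** (n_messages - 1 - i) for i in range(n_messages)]
    total = sum(raw)
    return [r / total for r in raw]

# Ten messages of sleep/stress/activity history, then one closing
# question: "also, do you think I should try magnesium?"
w = recency_weights(11)

# The final message carries more weight than the other ten combined.
last_dominates = w[-1] > sum(w[:-1])  # True
```

Under this toy decay, the magnesium question gets just over half the total weight, which matches the behavior described above: the model answers the last thing you said.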

Practical fix: When you need a comprehensive analysis, make your final message explicitly request synthesis. "Based on everything I've described — sleep, stress, activity, and diet — what are the strongest correlations with my fatigue?" forces the model to integrate rather than fixate on the last detail mentioned.

The negative space problem

This is the most underappreciated skill in working with AI for health: telling it what isn't happening.

When you describe symptoms, you naturally focus on what's present — "I'm tired, I have brain fog, I sleep poorly." This gives the model a set of positives to work with. But diagnosis and pattern analysis depend equally on negatives — what symptoms are absent, what you've already ruled out, what doesn't fit the pattern.

"I'm tired all the time" gives AI a generic starting point. "I'm tired all the time, but I don't have muscle pain, my appetite is normal, I don't crash after exercise, and it doesn't get worse before my period" eliminates entire categories. Fibromyalgia becomes less likely (no muscle pain). Post-exertional malaise is off the table (no exercise crashes). Hormonal cycling isn't the driver (no menstrual pattern).

Every negative you provide is a branch the AI can prune from its analysis. Without negatives, it considers everything. With them, it focuses.

This applies to tracked data too. When reviewing patterns with Iris, mention what you expected to correlate but didn't. "I assumed dairy was triggering my symptoms, but I tracked strictly for three weeks and there's no pattern." That negative finding is data — it redirects the investigation away from a dead end.

Anchoring: the first framing sticks

The way you first describe a problem shapes how AI thinks about it for the rest of the conversation. If you open with "I think I have a thyroid problem," the model anchors on thyroid. Every subsequent analysis tilts in that direction — it looks for thyroid-consistent patterns, suggests thyroid tests, interprets ambiguous data through a thyroid lens.

This is anchoring bias, and it's well-documented in both human and AI reasoning. Research on anchoring in clinical decision-making, published in Academic Medicine, found that initial diagnostic impressions strongly bias subsequent reasoning — even when contradictory evidence is presented later.

Practical fix: Start with observations, not theories. "I've been exhausted for three months, my sleep is long but unrefreshing, and it's worse in the afternoon" gives the model a set of observations to reason about. "I think my thyroid is off" gives it a conclusion to confirm. The first approach leads to genuine analysis. The second leads to confirmation bias.

Working with the system, not against it

Once you understand how AI prioritizes information, you can structure your conversations to get better results.

Lead with what matters most. Put your primary concern first, with the most detail. Secondary concerns should be clearly flagged as secondary: "I also want to mention X, but the main thing I'm investigating is Y."

Provide negatives alongside positives. For every symptom you describe, mention one or two relevant things that aren't happening. "Fatigue but no pain. Poor sleep but not difficulty falling asleep — I sleep long but wake unrested."

Direct the loading. Tell Iris which notes and entries to look at rather than letting it guess. "For this conversation, I need my sleep data, stress logs, and energy ratings from the last three weeks."

End with the real question. Whatever you ask last gets the most weight. If you've been describing symptoms for ten messages, make your final message the synthesis request: "Given all of this, what patterns do you see and what should I investigate first?"

Restate when it matters. If you're deep into a conversation and about to ask something important, briefly restate the key context. "Remember that I'm on metformin and my TSH was normal — given that, what would you suggest?" puts the critical facts right next to the question.

References

  1. "Attention Is All You Need" — NeurIPS, 2017. Foundational paper on transformer attention mechanisms.
  2. "Lost in the Middle: How Language Models Use Long Contexts" — Stanford NLP, 2023. Positional bias in language model attention.
  3. "Anchoring Bias in Clinical Reasoning" — Academic Medicine, 2005. Initial framing biasing subsequent diagnostic reasoning.
  4. "Calibration of Language Models" — Nature, 2023. How AI models weight and prioritize information in context.