AI and supplements — why this is where AI is most confidently wrong
Ask AI about supplements and you'll get an answer that sounds like it came from a knowledgeable nutritionist. Clear recommendations, specific dosages, mechanistic explanations for how each compound works. It's fluent, organized, and often wrong.
This isn't a generic "AI makes mistakes" warning. Supplements are a category where AI fails in specific, predictable ways — and where those failures carry real health risk.
Why the evidence base is a mess
AI is only as good as its training data, and the supplement evidence base is unusually unreliable. Most supplement research consists of small, short-term studies with inconsistent methodologies. Dosages vary between studies. Formulations differ. Outcome measures aren't standardized. A "positive result" for magnesium in one study might use a completely different form, dose, and duration than another.
This means AI is pattern-matching across contradictory, low-quality data and producing confident-sounding summaries. A 2025 evaluation of AI-powered supplement recommendations found accuracy rates between 31% and 36% — no engine tested exceeded 40%. Worse, 73% of the citations the AI provided came from unverified or non-authoritative sources, including unregulated health websites.
When the underlying evidence is messy, AI doesn't get cautious. It gets creative.
The credibility illusion
AI presents supplement information in the same authoritative format it uses for well-established medical facts. "Magnesium glycinate at 400mg may help reduce migraine frequency" sounds like settled science. It's not — but the formatting, the specificity of the dosage, and the confident tone create what researchers call a "credibility illusion." You trust it because it sounds like it knows.
This is particularly dangerous with supplements because the industry itself is largely unregulated. In the US, supplements don't require FDA approval before sale. Claims don't need to be verified. And the marketing language — "clinically studied," "doctor recommended," "evidence-based formula" — is designed to sound medical without meeting medical standards. AI absorbs all of this during training and reproduces it without the critical filter a trained clinician would apply.
Where it gets dangerous
The real risk isn't that AI recommends an ineffective supplement. It's that AI doesn't account for your specific situation.
Drug interactions. This is the biggest concern. St. John's Wort interacts with antidepressants, birth control, and blood thinners. Turmeric affects blood clotting. Magnesium interacts with certain antibiotics. AI might recommend any of these without asking what medications you're taking — or worse, acknowledge your medications and still miss the interaction because the training data didn't consistently flag it.
Condition-specific risks. Iron supplementation is helpful for iron-deficiency anemia and potentially harmful for people with hemochromatosis. High-dose vitamin D can cause toxicity. Certain B vitamins can mask symptoms of serious deficiencies in other areas. The right supplement for one person is the wrong supplement for another, and AI is bad at making that distinction reliably.
Replacing investigation with supplementation. This is the subtlest risk. If you're fatigued and AI suggests B12, iron, and vitamin D, you might spend months supplementing when a simple blood panel would have identified the actual issue — or ruled supplements out entirely. Supplements become a way to feel like you're doing something without actually investigating the problem.
How to evaluate any supplement suggestion
Whether it comes from AI, a blog, a friend, or a health food store employee — the same questions apply.
What's the specific evidence? Not "studies show" but which study, what population, what dose, what outcome. If the answer is vague, the evidence probably is too.

Is it relevant to your situation? A study in postmenopausal women doesn't necessarily apply to a 30-year-old man. Context matters more than headlines.

Have you checked for interactions? Your pharmacist is a better resource for this than any AI. They have access to interaction databases and they know your medication list.

Could a test answer this faster? If you suspect low iron, a blood test costs less than three months of supplements and gives you an actual answer.
How Iris handles supplement questions
Iris will discuss supplements if you ask, but it treats them the way it treats any health intervention — as something to investigate, not to prescribe. If you're tracking fatigue and ask about iron supplementation, Iris is more likely to suggest checking your ferritin levels first than to recommend a specific product.
Iris won't recommend specific supplement brands, dosages, or protocols as though they're medical advice. It can explain what the evidence says (and how strong that evidence is), flag potential interactions with medications in your health record, and help you formulate questions for your provider. But the decision to take a supplement — especially alongside medications — belongs to you and your healthcare team.
References
- Evaluating the Reliability and Accuracy of an AI-Powered Search Engine in Providing Responses on Dietary Supplements — PMC, 2025. AI accuracy of 31-36% on supplement recommendations, with 73% of citations from unverified sources.
- The Illusion of Safety: A Report to the FDA on AI Healthcare Product Approvals — PMC, 2025. How AI wellness products escape regulatory scrutiny.
- Unregulated Emotional Risks of AI Wellness Apps — Harvard Business School, 2025. Analysis of unregulated AI wellness industry and credibility illusion.
- AI Chatbots Can Run With Medical Misinformation — Mount Sinai, 2025. How AI propagates health misinformation based on source formatting.