NLP
Intermediate · 3 min read
Hallucination in LLMs: What It Is and How to Reduce It
When confident AI is confidently wrong
AI Academy
AI Engineer
LLM hallucination occurs when the model generates plausible-sounding but factually incorrect text — it optimizes for fluency, not truth. Reduce it by grounding responses with RAG, asking the model to cite sources, using lower temperature for factual tasks, and adding a verification step. For high-stakes domains, always pair LLM output with human review or deterministic fact-checking.
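Below is a minimal sketch of those mitigation steps combined: ground the prompt in retrieved passages, ask for citations, keep temperature low, and run a verification pass before returning the answer. The functions retrieve_passages and call_llm are hypothetical stand-ins for your own retriever and LLM client, not a specific library's API.

```python
from typing import List


def retrieve_passages(question: str, k: int = 3) -> List[str]:
    """Hypothetical retriever: return the top-k passages from your knowledge base."""
    raise NotImplementedError("Plug in your vector store or search index here.")


def call_llm(prompt: str, temperature: float = 0.1) -> str:
    """Hypothetical LLM client wrapper; low temperature suits factual tasks."""
    raise NotImplementedError("Plug in your LLM provider's SDK here.")


def grounded_answer(question: str) -> str:
    # Ground the model in retrieved context (RAG).
    passages = retrieve_passages(question)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))

    # Ask the model to answer only from the sources and to cite them.
    prompt = (
        "Answer using ONLY the sources below and cite them as [1], [2], ... "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    draft = call_llm(prompt, temperature=0.1)

    # Verification step: a second pass that checks the draft against the sources.
    check = call_llm(
        "Does every claim in the answer below follow from the sources? "
        "Reply SUPPORTED or UNSUPPORTED.\n\n"
        f"Sources:\n{context}\n\nAnswer:\n{draft}",
        temperature=0.0,
    )
    return draft if check.strip().startswith("SUPPORTED") else "I don't know."
```

For high-stakes domains, the final gate would be human review or a deterministic fact-check rather than the model judging itself.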
#nlp
#llm
#hallucination
#reliability