LREC 2026 Tutorial:
Hallucination in Large Foundation Models

Vipula Rawte, Aman Chadha, Amit Sheth, Amitava Das
AIISC, Apple, BITS

May 12, 2026 Morning

About this tutorial

Large Foundation Models (LFMs) have advanced significantly in generating human-like text, but their tendency to hallucinate, that is, to produce incorrect or fabricated information, remains a critical challenge. This tutorial offers an in-depth look at hallucination in LFMs, introducing the key concepts and open issues in this area. We will explore various types of hallucination, including Factual Mirage and Silver Lining, and present cutting-edge methods for benchmarking, detection, and mitigation. Understanding hallucination is especially important in a multimodal context, where Vision-Language Models (VLMs) can compound the problem by pairing hallucinated text with misleading images or video. We will also address code hallucination in LFMs.

The tutorial provides practical techniques for minimizing hallucinations through both black-box and gray-box approaches. Tailored for researchers and practitioners in generative AI, the session bridges the gap between emerging research and practical solutions, offering participants concrete insights and tools for improving the factual accuracy of LFM outputs. Attendees will gain a deeper understanding of the complexities of LFM hallucination and learn strategies to drive future advances in the field.
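To give a flavor of the black-box techniques discussed in the session, the sketch below scores a candidate answer by how consistently it agrees with additional samples drawn from the same model for the same prompt, in the spirit of sampling-based consistency checks. This is a minimal illustration, not the tutorial's own method: the `sample_responses` input, the token-overlap scorer, and the flagging threshold are all illustrative assumptions.

```python
# Minimal sketch of a black-box, sampling-based hallucination check.
# Assumption: you already have one candidate answer plus several extra
# samples from the same model for the same prompt; no model internals
# (logits, attention) are used, hence "black-box".

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of lowercased token sets (a crude agreement proxy)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def consistency_score(candidate: str, sample_responses: list[str]) -> float:
    """Average agreement between the candidate and resampled answers.

    Low scores suggest the candidate is not supported by the model's own
    resamples and may be hallucinated."""
    if not sample_responses:
        return 0.0
    return sum(token_overlap(candidate, s) for s in sample_responses) / len(sample_responses)

if __name__ == "__main__":
    candidate = "The Eiffel Tower was completed in 1889 in Paris."
    sample_responses = [
        "The Eiffel Tower opened in 1889 in Paris, France.",
        "It was finished in 1889 for the Paris World's Fair.",
        "Construction of the Eiffel Tower ended in 1889.",
    ]
    score = consistency_score(candidate, sample_responses)
    print(f"consistency = {score:.2f}  (flag as possibly hallucinated if below ~0.3)")
```

A gray-box variant would replace the overlap proxy with signals from the model itself, such as token-level confidence, which is the other family of approaches the tutorial covers.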

BibTeX

@article{hallucination-llm-tutorial,
  author  = {Rawte, Vipula and Chadha, Aman and Sheth, Amit and Das, Amitava},
  title   = {LREC 2026 Tutorial: Hallucination in Large Foundation Models},
  journal = {LREC 2026},
  year    = {2026},
}