Manuscript Title:

EXPLORING HALLUCINATIONS IN LARGE LANGUAGE MODELS (LLMs): A SYSTEMATIC REVIEW OF TYPOLOGIES, ORIGINS, AND MITIGATION APPROACHES

Author:

AMAL ALTALHI, ITAMAR E. SHABTAI

DOI Number:

DOI: 10.5281/zenodo.18998811

Published: 2026-03-10

About the author(s)

1. AMAL ALTALHI - Center of Information Systems and Technology, Claremont Graduate University, Claremont, CA, USA. & Department of Management Information Systems, Faculty of Business Administration, University of Tabuk, 71491 Saudi Arabia.
2. ITAMAR E. SHABTAI - Center of Information Systems and Technology, Claremont Graduate University, Claremont, CA, 91711, USA.

Full Text: PDF

Abstract

This systematic literature review advances the understanding of large language models (LLMs) by thoroughly examining hallucination, including its types, causes, and mitigation approaches, with the aim of enhancing LLM usefulness in natural language processing (NLP). A comprehensive search of electronic databases (Web of Science, IEEE Xplore, OpenReview, Google Scholar) yielded 1,136 records. Of these, 27 met the inclusion criteria and were retained for analysis. A meta-aggregative approach was used to analyze and synthesize the articles, and the research questions provided the themes used to organize the findings and results. The common taxonomy of LLM hallucinations includes fact hallucination, honesty hallucination, lack of alignment, conflict in ideas, nonsensical hallucination, random hallucination, object hallucination, and intrinsic and extrinsic hallucination. Identified causes of hallucination were training data issues, model limitations/overfitting, a limited context window/knowledge cutoff, and difficulty with nuanced language understanding. Effective mitigation approaches were domain-specific fine-tuning, prompting, model reprogramming, and grounding.


Keywords

Artificial Intelligence, Generative AI, Hallucination, Large Language Model, LLMs, Fine-Tuning, Overfitting, Object Hallucination, Prompting, Grounding.