AI in eLearning: Hallucinations, Distortions, and The Need for Human SMEs

AI in eLearning Isn’t Your SME

Artificial intelligence is rapidly transforming Learning Experience Design (LxD). From drafting course outlines to generating quiz banks and even creating personalized learning paths, AI tools can accelerate development in ways that save time and resources. But here’s the catch: while AI in eLearning can act like an assistant, it cannot be the Subject Matter Expert (SME). That role still firmly belongs to humans.

Where AI Falls Short

Despite its strengths, AI isn’t foolproof, and in Learning Experience Design, accuracy is non-negotiable. Common pitfalls include:

- Hallucinated facts, statistics, and citations that sound authoritative but don’t exist.
- Outdated information presented as current, especially around regulations and tools.
- Terminology that is subtly wrong for your industry or organization.
- Generic content that ignores your internal policies and context.

Imagine a cybersecurity course where AI generates a scenario about phishing emails. On the surface, the example looks correct, but the terminology is slightly off, and the response steps don’t align with the organization’s internal IT protocols. If a learner were to follow those instructions in the real world, the results could be costly.

Why These Mistakes Happen

These issues aren’t random; they’re built into how large language models (LLMs) work. LLMs are trained on large amounts of data to predict patterns in text, not to verify facts. That means when you ask for a regulation, definition, or citation, the AI generates something that looks right based on patterns it has seen before, but it has no way of knowing whether it’s true.

Researchers call this a hallucination: the model produces information that sounds plausible but may be fabricated. This happens because:

- LLMs are optimized to produce fluent, plausible text, not verified facts.
- Their training data can be outdated, incomplete, or simply wrong.
- They have no built-in mechanism for checking claims against authoritative sources.
- They state fabricated details with the same confident tone as accurate ones.

In other words, AI is excellent at sounding correct. Without human oversight, incorrect or misleading information can make its way into your course content. In Learning Experience Design, where learners depend on accurate content to build real-world skills, that is not an option.

Humans Are the SMEs

Human oversight is non-negotiable. LxDs, internal review processes, and actual subject matter experts are the key to ensuring content is accurate, relevant, and contextually appropriate. As a Learning Experience Designer, you have the obligation to:

- Verify every fact, term, and citation against authoritative sources.
- Confirm that scenarios and procedures match your organization’s actual policies.
- Route specialized content through genuine subject matter experts before it ships.
- Keep content current as regulations, tools, and best practices change.

AI can provide a draft, but only a human can validate that the knowledge is correct, applicable, and meaningful.

Checking Visual and Auditory Content

Multimedia introduces new failure modes. Visual and audio assets generated or enhanced by AI need careful human review.

Visual (images and videos):

- Garbled or nonsensical text embedded in generated images.
- Anatomical oddities, warped objects, and other telltale rendering errors.
- Diagrams and charts that look polished but misrepresent the process or data.
- Imagery that clashes with your brand, accessibility, or cultural standards.

Audio (narration, AI voices, sound cues):

- Mispronounced technical terms, acronyms, and proper nouns.
- Flat or mismatched intonation that undercuts the instructional tone.
- Narration that drifts from the approved script or drops words.
- Captions and transcripts that don’t match the final audio.

Practical checks for multimedia:

- Zoom in on generated images and read every piece of embedded text.
- Have an SME confirm that diagrams and screenshots depict the process correctly.
- Listen to AI narration end to end while following the approved script.
- Verify that captions, transcripts, and alt text match the final assets.

The Balanced Approach

The most effective use of AI in eLearning is to let it do the heavy, monotonous lifting: brainstorming examples, structuring outlines, or generating rough drafts. It’s our job as humans to stay curious and rigorous, refining that content and validating its accuracy.

This blended approach maximizes efficiency without sacrificing accuracy. Learners benefit from engaging, up-to-date, and contextually relevant training while organizations benefit from faster development cycles that don’t compromise on quality or trust.

By combining AI’s speed with human insight, organizations can unlock the full potential of eLearning: experiences that are efficient, credible, meaningful, and effective in driving real-world performance.

At Apti, we believe in the power of this blended approach, pairing the speed of AI with the irreplaceable expertise of human reviewers. If you’re ready to explore how to leverage AI responsibly in your organization, let’s connect.
