
Artificial intelligence is rapidly transforming Learning Experience Design (LxD). From drafting course outlines to generating quiz banks and even creating personalized learning paths, AI tools can accelerate development, saving time and resources. But here’s the catch: while AI in eLearning can act like an assistant, it cannot be the Subject Matter Expert (SME). That role still firmly belongs to humans.
Where AI Falls Short
Despite its strengths, AI isn’t foolproof, and in Learning Experience Design, accuracy is non-negotiable. Common pitfalls include:
- Outdated or fabricated facts: AI often cites sources that don’t exist or references regulations that have changed since the model was trained. In compliance training, that’s a serious risk.
- Contextual misalignment: An AI-generated example of customer service might reference a restaurant when your organization is a healthcare provider. The example sounds fine, but misses the opportunity to connect with the learner’s reality.
- Surface-level learning: AI can create quiz questions, but they may only test recall rather than the higher-level problem-solving skills that are actually tied to performance outcomes.
- Bias and tone issues: Without oversight, AI-generated text may unintentionally reinforce stereotypes or use language that is inappropriate or culturally insensitive for your audience.
Imagine a cybersecurity course where AI generates a scenario about phishing emails. On the surface, the example looks correct, but the terminology is slightly off, and the response steps don’t align with the organization’s internal IT protocols. If a learner were to follow those instructions in the real world, the results could be costly.
Why These Mistakes Happen
These issues aren’t random; they’re built into how large language models (LLMs) work. LLMs are trained on vast amounts of text to predict what comes next, not to verify facts. So when you ask for a regulation, definition, or citation, the AI generates something that looks right based on patterns it has seen before, with no way of knowing whether it’s actually true.
Researchers call this a hallucination: the model produces information that sounds plausible but may be fabricated. This happens because:
- The training data may be incomplete or outdated, so the AI “fills in the gaps” with guesses.
- It has no built-in fact-checking mechanism unless connected to live databases.
- It’s very good at mimicking the style of reliable sources, which can make false information look convincing. Citations, for example, follow a relatively consistent structure that LLMs can easily replicate, whether or not the cited work actually exists.
In other words, AI is excellent at sounding correct, but without human oversight, incorrect or misleading information can make its way into your course content. In Learning Experience Design, where learners depend on accuracy to build real-world skills, that’s not an option.
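One way to put that oversight into practice is to screen AI-supplied citations against a live bibliographic index before they ever reach a human reviewer. Here’s a minimal Python sketch of the idea. It assumes network access and the `requests` library, uses the public Crossref REST API (which only covers DOI-registered works), and the relevance cutoff and sample title are illustrative placeholders, not a definitive implementation:

```python
import requests

def citation_exists(title: str, min_score: float = 60.0) -> bool:
    """Heuristic check: does a cited title match a real Crossref record?

    A hit does not prove the citation is accurate, and a miss does not
    prove fabrication (Crossref coverage is incomplete); the goal is only
    to flag items for human review.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Crossref attaches a relevance score to each result; treat a strong
    # match as evidence the work exists. The 60.0 cutoff is arbitrary.
    return any(item.get("score", 0) >= min_score for item in items)

# Hypothetical AI-generated citation, used here purely as an example.
for title in ["Adaptive Microlearning Outcomes in Hybrid Workforces"]:
    if not citation_exists(title):
        print(f"REVIEW: could not verify {title!r}")
```

A script like this doesn’t replace the reviewer; it simply triages a draft so that human attention goes first to the citations most likely to be fabricated.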
Humans Are the SMEs
Human oversight is non-negotiable. LxDs, internal review processes, and actual subject matter experts are the key to ensuring content is accurate, relevant, and contextually appropriate. As a Learning Experience Designer, you have an obligation to:
- Fact-check AI outputs against reliable, up-to-date resources.
- Align material with organizational needs, such as actual company policies or industry practices.
- Shape clarity and tone so that the training feels approachable and contextually appropriate.
- Confirm learning objectives are met, ensuring that learners gain information and actionable skills that are directly relevant to the work they do.
AI can provide a draft, but only a human can validate that the knowledge is correct, applicable, and meaningful.
Checking Visual and Auditory Content
Multimedia introduces new failure modes. Visual and audio assets generated or enhanced by AI need careful human review.
Visual (images and videos):
- Common AI artifacts: odd textures, distorted hands or faces, incorrectly rendered text, misplaced logos, or inconsistent branding. These reduce credibility and can confuse learners.
- Text in images: AI often generates text that looks plausible but is garbled or inaccurate. Never rely on embedded image text for safety-critical instructions.
- Context fit: Ensure imagery reflects the learner population and industry (e.g., correct PPE in a manufacturing course).
- Accessibility: Check alt text, color contrast, and font sizes, and confirm each image adds meaningful value rather than serving as decoration (a quick contrast check is sketched after this list).
- Copyright & IP: Confirm imagery does not infringe trademarks or depict identifiable people without consent.
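Of these visual checks, color contrast is one of the few that can be scripted instead of eyeballed. The sketch below implements the WCAG 2.x contrast-ratio formula for two sRGB colors; the sample colors are placeholders, and it tests only the 4.5:1 threshold for normal body text (large text and non-text graphics have different thresholds):

```python
def relative_luminance(rgb):
    """Relative luminance of an 8-bit sRGB color, per the WCAG 2.x formula."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Placeholder brand colors: mid-gray caption text on a white slide.
ratio = contrast_ratio((90, 90, 90), (255, 255, 255))
verdict = "pass" if ratio >= 4.5 else "fail"  # WCAG AA, normal text
print(f"{ratio:.2f}:1 -> {verdict}")
```

Running a check like this across a course’s palette catches low-contrast text before it reaches learners, which is far cheaper than fixing it after an accessibility audit.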
Audio (narration, AI voices, sound cues):
- Pronunciation & prosody: AI voices can mispronounce names, acronyms, or jargon, or use a monotone/incorrect emphasis that changes meaning.
- Tone & emotion: Delivery must match the content. Sensitive topics need measured tone, and compliance warnings need authority and clarity.
- Transcripts & captions: Always provide accurate transcripts and captions. AI-generated captions need human correction, especially for technical terms.
- Accessibility: Confirm captions are synced and readable, and that transcripts follow a logical reading order for screen readers (a caption timing check is sketched after this list).
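Caption timing can be screened the same way. The simplified sketch below scans WebVTT cues for two problems that are common in AI-generated captions: overlapping cues and cues that force an unrealistic reading speed. It assumes HH:MM:SS.mmm timestamps and single-line cue text, and the 20-characters-per-second ceiling is an illustrative value, not a formal standard:

```python
import re

CUE = re.compile(
    r"(\d{2}):(\d{2}):(\d{2})\.(\d{3}) --> (\d{2}):(\d{2}):(\d{2})\.(\d{3})"
)

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def flag_cues(vtt_text, max_cps=20.0):
    """Yield warnings for overlapping cues or cues that read too fast."""
    lines = vtt_text.splitlines()
    cues = []
    for i, line in enumerate(lines):
        match = CUE.match(line.strip())
        if match:
            start = to_seconds(*match.groups()[:4])
            end = to_seconds(*match.groups()[4:])
            # Simplification: treat only the next line as the cue text.
            text = lines[i + 1].strip() if i + 1 < len(lines) else ""
            cues.append((start, end, text))
    for j, (start, end, text) in enumerate(cues):
        duration = max(end - start, 0.001)
        if len(text) / duration > max_cps:
            yield f"Cue at {start:.3f}s reads too fast: {text!r}"
        if j > 0 and start < cues[j - 1][1]:
            yield f"Cue at {start:.3f}s overlaps the previous cue"

sample = """WEBVTT

00:00:01.000 --> 00:00:02.000
This caption crams far too many characters into a single second.
"""
for warning in flag_cues(sample):
    print(warning)
```

Anything the script flags still goes to a human, who decides whether to split the cue, trim the wording, or retime the narration.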
Practical Checks for Multimedia
- Play assets on multiple devices and platforms.
- Verify all on-screen text against the source script.
- Listen for mispronunciations and unnatural pauses; correct via re-prompting or re-recording.
- Have an accessibility reviewer confirm color contrast, alt text, and caption accuracy.
The Balanced Approach
The most effective use of AI in eLearning is to let it do the heavy, monotonous lifting, such as brainstorming examples, structuring outlines, or generating rough drafts. Our job as humans is to stay curious and rigorous, refining that content and validating its accuracy.
This blended approach maximizes efficiency without sacrificing accuracy. Learners benefit from engaging, up-to-date, and contextually relevant training while organizations benefit from faster development cycles that don’t compromise on quality or trust.
By combining AI’s speed with human insight, organizations can unlock the full potential of eLearning: experiences that are efficient, credible, meaningful, and effective in driving real-world performance.
At Apti, we believe in the power of this blended approach, pairing the speed of AI with the irreplaceable expertise of human reviewers. If you’re ready to explore how to leverage AI responsibly in your organization, let’s connect.