
As tools like Articulate Rise, Articulate Storyline, and ChatGPT increasingly weave AI into the fabric of learning design, it’s become harder to tell where the human stops and the algorithm begins. This lack of clarity in the use of AI in eLearning can raise questions for learners about credibility, authorship, and control. For designers, it presents an ethical challenge, and a professional opportunity.
This article explores how AI shapes learning experiences behind the scenes, why transparency matters, and how designers can communicate AI’s role effectively without disrupting the learner journey.
Where AI Shows Up in LxD
AI is already influencing how we design and deliver content, whether learners realize it or not. Its contributions range from overt to invisible, and each carries its own implications for transparency and trust.
Visible Use of AI in eLearning
Designers often rely on AI for:
- Scripting and drafting content
- Writing learning objectives
- Generating visuals and voiceovers
- Research and outlining modules
Features like Rise’s built-in AI assistant have made these uses increasingly accessible, with results that can be easy to identify, especially for learners familiar with AI’s tone and structure.
Behind-the-Scenes Use of AI in eLearning
Some of AI’s most powerful effects happen quietly:
- Adaptive assessments based on learner performance
- Dynamic feedback tailored by automated analysis
- Reordered modules or recommended content based on past behavior
In AI-powered adaptive modules, learners may notice their course path shift after a quiz with no explanation. Without disclosure, they may assume the platform is responding arbitrarily, which can erode trust and affect their performance.
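One way to avoid that arbitrary-seeming shift is to make the disclosure travel with the routing decision itself. The sketch below is a minimal, hypothetical illustration (the function and field names are not from any specific LMS API): the adaptive logic returns both the next module and a learner-facing explanation, so the UI can always say why the path changed.

```typescript
// Hypothetical sketch: pairing an adaptive path change with a
// learner-facing disclosure so the shift never appears arbitrary.

interface QuizResult {
  moduleId: string;
  score: number; // percentage, 0–100
}

interface PathUpdate {
  nextModuleId: string;
  disclosure: string; // shown to the learner alongside the change
}

function adaptPath(result: QuizResult, passThreshold = 70): PathUpdate {
  const needsReview = result.score < passThreshold;

  // Illustrative routing: below the threshold, insert a review module;
  // otherwise continue to the next module in the sequence.
  const nextModuleId = needsReview
    ? `${result.moduleId}-review`
    : `${result.moduleId}-next`;

  // The disclosure is produced by the same decision that reroutes the
  // learner, so the two can never drift out of sync.
  const disclosure = needsReview
    ? "Based on your quiz results, we've added a review module to help you strengthen this topic."
    : "Nice work! Your quiz results unlocked the next module in your path.";

  return { nextModuleId, disclosure };
}
```

Keeping the explanation and the routing decision in one place is a small design choice, but it guarantees that every adaptive change arrives with context attached.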
Why Transparency About AI Matters
LxDs are more than content creators; we’re curators of knowledge, a role that demands trustworthiness and credibility. Failing to disclose AI’s role can undermine the learning experience, so it’s worth having a deliberate plan for how you communicate AI’s role in your work to the learner.
Supports Learner Agency
Learners who understand how and why their experience is being shaped are more likely to engage. Research in learning science suggests that transparency and agency can support retention. Clarity helps learners feel in control of their learning journey, even when it’s being optimized for them.
Enhances Credibility
Most learners can now recognize AI-generated content. When AI contributions are presented as original human work, it risks undermining the perceived integrity of the course. Clear attribution reassures learners that content was intentionally designed and responsibly reviewed.
Promotes Ethical Design
Credit matters. Just as we cite research or consult SMEs, disclosing the use of AI in eLearning respects intellectual transparency and avoids misrepresenting authorship, particularly in academic or compliance-driven contexts.
How to Communicate AI’s Role
Transparency should feel intentional, and LxDs can incorporate small but strategic disclosures that inform learners without overwhelming them. If visuals, scripts, or assessments were generated with the help of a tool like ChatGPT, designers can acknowledge that tactfully within the course. For example, you could say:
“Portions of this module were created using AI tools and reviewed by instructional designers.”
“We built this module with generative AI for initial drafts. Subject matter experts reviewed, fact-checked, and edited all content before publishing.”
Here are some additional tips for disclosure:
1. Use Direct, Jargon-Free Language
Tailor your language to your audience (corporate and academic learners call for different approaches), and avoid overly technical explanations. Instead, focus on simple statements that explain what AI did:
“We generated this feedback with an AI tool that reviewed your quiz responses.”
“We updated your learning path using insights from your recent performance.”
2. Integrate Touchpoints into the Experience
Choose moments where disclosure feels natural and layer it in where it makes the most sense:
- Sidebar notes during AI-generated activities
- Intro slides or tooltips in adaptive modules
- Completion messages highlighting how content was selected or tailored
Example:
“You may have noticed a new module appeared after Quiz 2. Our system used your performance data to help guide you toward areas where you can grow.”
3. Visual Cues
For learners skimming content, consider icons or hover-over labels to identify AI-generated sections. These micro-interactions promote transparency without derailing the flow.
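As one hypothetical way to implement such a cue, a small helper could wrap AI-assisted sections in a badge and hover label at publish time. The markup, class names, and wording below are illustrative only, not tied to any authoring tool’s export format:

```typescript
// Illustrative sketch: wrapping an AI-assisted section with a visible
// badge and a hover-over label so skimming learners can spot it.

function labelAiSection(
  sectionHtml: string,
  note = "Drafted with AI assistance and reviewed by our design team"
): string {
  // The title attribute provides the hover-over label; the badge text
  // gives skimmers an at-a-glance visual cue. aria-label keeps the
  // disclosure available to screen readers as well.
  return (
    `<section class="ai-generated" title="${note}">` +
    `<span class="ai-badge" aria-label="${note}">AI-assisted</span>` +
    sectionHtml +
    `</section>`
  );
}
```

Because the note rides along as both a visible badge and an accessible label, the disclosure stays consistent wherever the section is rendered.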
Final Thoughts
The use of AI in eLearning is here to stay, and its influence in learning design will only grow. Designers should openly acknowledge when and how they’re using it to protect credibility, build learner trust, and strengthen the overall learning experience.
As AI becomes a more prominent tool in the learning designer’s toolkit, let’s focus on crediting it thoughtfully and intentionally, with respect to the learner’s experience. Not everything needs a neon label, but learners and stakeholders can benefit when they understand the mechanics behind what’s shaping their learning experience.
At Apti, we’re exploring ways to support the integration of AI transparently and ethically within the realm of Learning Experience Design. Interested in learning how else we’re putting learner trust at the forefront of our work? Contact us today!
We developed this article with the assistance of ChatGPT. A human editor revised and reviewed it to ensure clarity, accuracy, and alignment with ethical LxD best practices.