Why most AI training doesn't stick
and what actually works.
Hamza Hussain
22 April 2026
Last year, Gartner surveyed 114 HR leaders. 88 percent said their organisations had not realised significant business value from AI tools.
Not because they hadn't tried. Because what they did try (buying tools, running demos, sending people on courses) doesn't produce the outcome they were expecting.
The problem is not AI. The problem is how the training is delivered.
The forgetting curve is working against every training session
In the 1880s, Hermann Ebbinghaus ran a series of experiments on memory and retention. His finding, now called the forgetting curve, showed that within 24 hours of learning something new, people forget roughly 70 percent of it. By the end of the week, 90 percent is gone.
This was discovered over a century ago. Corporate training has largely ignored it.
The typical format (a two-hour workshop, a recorded Zoom, a LinkedIn Learning module) delivers information once, in a single sitting, with no requirement to apply it immediately. Completion is the metric. Transfer is not.
Self-paced online courses have completion rates of 10 to 15 percent on average. And even when a course does get finished, most of what was learned is gone before Monday morning.
Generic training creates a specific problem
AI training compounds this further, because AI is inherently generic until you make it specific.
A course that teaches someone to write better prompts is teaching a skill in the abstract. The moment that person goes back to their job, with its own inbox, its own hiring process, its own way of structuring notes, they face a translation problem: "How does this actually apply to what I do?"
Most people cannot bridge that gap on their own. So they don't. The tool sits unused. Productivity stays flat.
This is what researchers call the knowing-doing gap. People understand what AI can do. They don't know what it can do for them, in their specific role, with their specific tasks. Nobody taught them that part.
What live, hands-on training actually changes
Research on training transfer consistently shows that application in context is the most critical variable. Not how engaging the content was. Not how much people liked the trainer. Whether they were able to practise in an environment that matched where they'd actually use the skill.
Learners in interactive environments, where they're building and doing rather than just watching, retain around 75 percent of what they learn. The gap between 75 percent retention and 15 percent completion is not a motivation gap. It is a structure gap.
Live training closes it because the learner builds something real during the session. They leave with a workflow they ran themselves, on their own tasks, using their own data. There is no translation step.
The 48-hour window
There is a narrow window after any training session in which the chance of application is highest. If someone does not use what they learned within 48 hours, the forgetting curve has already done most of its damage.
This is why we end every session with a skill file: a structured workflow that participants open in Claude and run themselves before the session ends. The session creates the first application. The skill file removes the barrier to running it again.
Getting people to use something once, in the room, with support, is the mechanism. Everything after that is reinforcement.
The structure is the problem. Not the team.
If your team has been through AI training before and it did not stick, that outcome is expected given how that training was structured. It is not a reflection of the team's capability or motivation.
The question is not whether AI can save your team time. It can. The question is whether the training was built around how your team actually works (their tasks, their terminology, their specific friction points) or for a generic employee at a generic company.
We have not yet seen a team that could not make meaningful use of AI once the training was built around what they actually do. The capability is not the blocker. The gap between generic knowledge and specific application is.
