Embracing AI in L&D: How Will My Role Change?
How to Transform Yourself by Embracing AI in L&D
In my previous article, we began exploring lessons learned from the conference on how learning professionals can prepare for the changes that Artificial Intelligence (AI) and automation will bring in the near future. This article continues with five more calls to action for adopting AI in L&D, and also tries to answer a common question about Large Language Models (LLMs): how good are they at reasoning?
Key Takeaways for Adopting AI in L&D
Here are some takeaways from conversations with industry leaders at the conference:
1. Develop a Strong Understanding of Behavioral Sciences
- Behavior change research models
Get familiar with models like COM-B (capability, opportunity, motivation – behavior), self-determination theory, and Fogg's behavior model to understand what drives learning motivation and engagement. Ultimately, your goal is behavior change, not just information retention.
- Motivational design
Use insights from these models to create learning experiences that promote learners' autonomy, competence, and relatedness, increasing the likelihood of sustained behavior change.
- Check and adapt
Continually explore different strategies to motivate and engage learners, then adapt based on what works best. Measure the right things! You have to go beyond level 1 satisfaction surveys and the "knowledge test" at the end of the course. For example, by shifting your focus from hindsight (satisfaction with content) to prediction (behavioral drivers such as motivation, opportunity, job skills, and goal achievement), you can gain more actionable insights from the learning experience that you and your stakeholders can act on.
2. Build a Network
- Follow industry experts (internal and external)
Follow industry leaders in L&D, AI, and future-of-work trends. Choose wisely. When it comes to embracing AI in L&D, you'll find people everywhere on the scale from "AI will solve all problems" to "AI will destroy the world." Don't create echo chambers where everyone says the same thing. Find practitioners who are actually running projects, not just blogging about AI using AI. Reading insights from experts helps you stay informed and inspired by emerging trends. There is a lot of noise in the field today; let trusted voices help you cut through it, or you will end up frustrated.
- Join L&D communities
Get involved in communities such as LinkedIn groups, conferences, and forums. Networking with other professionals can bring new ideas and new solutions. But don't just stay in the L&D bubble! See the next point.
- Go beyond L&D and HR
Find champions within the company. AI will be adopted first wherever it has a direct impact on business goals. Be there. Learn from the early mistakes.
3. Focus on Building “Learning” Ecosystems, Not Just Programs
- Think beyond the curriculum
By "learning," I don't just mean LMSs, LXPs, or other tools dedicated to training. Anything that enables, accelerates, and scales your employees' ability to do their job is learning. Create ecosystems that support continuous learning, informal learning, and community engagement. Try using chatbots, forums, or peer coaching to foster a culture of learning in the workflow. But, again, know when to step out of the way!
- Use technology to integrate learning into the flow of work
No one is happy to log into yet another LMS or LXP, and no one will search the LMS or LXP later for how to do things. Yes, AI is now featured in every learning technology application, but it is fragmented and often just a wrapper around a Large Language Model. Integrate learning systems with the operational systems where employees actually work, in the background, using APIs. Employees don't need to know where the content is stored; they just need to be able to access it. Learning technology is any technology that supports learning. Build your alliances.
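To make the "integrate in the background via APIs" idea concrete, here is a minimal sketch of a recommendation hook a work tool could call so that relevant content surfaces in the flow of work. The catalog, field names, and tag-matching logic are all illustrative assumptions, not any real vendor's API.

```python
# Hypothetical in-memory catalog; in practice this would sit behind a
# learning-platform API that the work tool queries in the background.
LEARNING_CATALOG = [
    {"title": "Handling refund requests", "tags": {"refunds", "billing"},
     "url": "/lms/refunds-101"},
    {"title": "Escalation etiquette", "tags": {"escalation", "support"},
     "url": "/lms/escalation"},
]

def recommend_resources(task_tags):
    """Return catalog items whose tags overlap the current task's context."""
    task_tags = set(task_tags)
    return [item for item in LEARNING_CATALOG if item["tags"] & task_tags]

# A ticketing tool could call this when an agent opens a billing ticket,
# so the agent never has to go search the LMS:
hits = recommend_resources(["billing", "refunds"])
print([h["title"] for h in hits])  # → ['Handling refund requests']
```

The point of the design is that the employee stays in their own tool; the learning system is only an API behind the scenes.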
4. Strengthen Change Management Skills
- Learn change management frameworks
Familiarize yourself with frameworks such as ADKAR (awareness, desire, knowledge, ability, reinforcement) or Kotter's 8-step change model, along with behavioral motivation.
- Expect resistance to change
Develop strategies to overcome resistance by understanding employee concerns and demonstrating the long-term value of new learning methods. Your AI initiative (at least for now) lives or dies on adoption. Everyone wants change, but no one wants to change. Start by solving specific problems for your stakeholders and target audience. Start small, pilot, and scale from there through iteration. Recruit the doubters as testers! They will be more than happy to try to break the application and point out its flaws.
5. Understand Data Security, Data Privacy, and Ethics
- Build the foundations
Do you have a data privacy council today? If not, start building one. Find out who owns data security in your organization. Partner on clear guidance for data classification standards: what kind of data can be used where. Understand your vendors' data security and data privacy policies; you may or may not own the data, and you need clarity on how it is archived. You need clear policies on how long you keep data, and where and how it is stored (encrypted both in transit and at rest). Be clear about what data you collect and what that data can be used for. (For example, if you collect skills data for self-directed development programs, could someone later decide to use that data to evaluate performance?)
How Smart Are LLMs, Anyway?
Finally, one of the most interesting questions I received from conference attendees was how smart current LLMs actually are. Are they genuinely reasoning, or just hallucinating? How much can we rely on them to "think," especially when we build solutions that connect AI (LLMs) directly to the audience?
LLMs are trained on large data sets to learn patterns, which they use to predict what comes next. To oversimplify: you take all the data you've collected and split it into a training set and a test set. You train the model on the training set. Once you think it's good at pattern recognition, you evaluate it on the test set it hasn't seen yet. It's more complicated than that, but the point is that what looks like "intelligence" and reasoning can in fact be pattern recognition.
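The train/test split described above can be sketched in a few lines of plain Python (no ML library assumed; the 80/20 ratio and the toy data are arbitrary choices for illustration):

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the data, then hold out a fraction the model never sees in training."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

examples = list(range(100))          # stand-in for real training examples
train, test = train_test_split(examples)
print(len(train), len(test))         # → 80 20
```

Performance on the held-out test set is what tells you whether the model learned generalizable patterns or merely memorized the training data.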
What does that look like in practice? Let's say you trained your model on how to solve math problems. When the model sees a problem, it follows a learned pattern for solving it. It has no opinion, belief, or any kind of fundamental stance on the answer. That's why, if you simply tell the model it's wrong, it apologizes and revises the answer. Mathematical reasoning (as of today) is not its strong suit.
Research testing a range of models with the GSM-Symbolic benchmark showed that generating variants of the same mathematical problem by substituting certain elements (such as names, roles, or numbers) leads to inconsistent model performance, indicating that the problem solving relies on pattern recognition rather than reasoning [1]:

> Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark.
If you add seemingly relevant information to a problem that isn't actually needed, humans will, logically, just ignore it. LLMs, however, seem to try to incorporate the new information even when it contributes nothing to the reasoning, as the study found:

> Adding a single clause that seems relevant to the question causes a significant performance drop (up to 65%) across all state-of-the-art models, even though the clause does not contribute to the chain of reasoning required to reach the final answer.
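The study's variant-generation idea can be illustrated with a toy sketch: one problem template, many surface variants, and one unchanged underlying answer formula. The template, names, and numbers here are invented for illustration; they are not from the benchmark itself.

```python
import random

# One fixed reasoning structure; only surface details (name, numbers) vary.
TEMPLATE = ("{name} has {a} apples and buys {b} more. "
            "How many apples does {name} have now?")

def make_variant(rng):
    """Generate one surface variant plus its ground-truth answer."""
    name = rng.choice(["Sophia", "Liam", "Mia", "Noah"])
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    question = TEMPLATE.format(name=name, a=a, b=b)
    return question, a + b   # the answer formula is identical for every variant

rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```

A model that truly reasoned would solve every variant equally well, since only the labels change; inconsistent performance across such variants is what suggests pattern matching instead.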
In short, today's LLMs are amazing at pattern recognition, operating at a speed and scale no human can match. They are great at mimicking soft skills! But they have their limits (as of today) in mathematical reasoning, especially in explaining why an answer is the answer. That said, newer models, such as OpenAI's Strawberry, are trying to change this [2].
References:

[1] GSM-Symbolic: Understanding the Limits of Mathematical Reasoning in Large Language Models
[2] Something New: On OpenAI's "Strawberry" and Reasoning