The Impact of AI on the Kirkpatrick Model
This article will help you better understand how Machine Learning (ML) and Natural Language Processing (NLP) work, and how these technologies can improve the ability of learning program leaders to measure the impact of their training solutions.
It helps to start with some terminology. In the world of ChatGPT and chatbots, you've inevitably heard the following phrases casually tossed around in meetings. Use this next section as a cheat sheet for these terms so you can effectively identify the right technology or practice for your organization.
Helpful terms used when talking about Augmented Intelligence (AI)
Statistical modeling is the backbone of the toolset. As a class of mathematical modeling, statistical modeling allows for the inclusion of probability: the “what-if” of personalization.
Machine learning is the engine that drives the toolset. The algorithms take the information from the modeling and launch it into the predictive zone.
Deep learning is that predictive zone. Using the ML algorithms, deep learning clusters the data returned and serves it up.
Computational linguistics is the process of interacting. It’s how we make our requests and get our responses.
The following diagram helps provide additional context for these terms. It shows how these four different concepts fit together to form the toolset we use when developing Artificial Intelligence solutions for customer education.
How AI improves the measurement of trained users
As you think about how to apply this technology to your customer education programs, take a look at this recent research from Learning Science Weekly about a study that used AI to help with responses to course reviews. One of the greatest challenges customer education teams face when building digital or eLearning courseware is assessing for comprehension. The most common way of assessing comprehension is to ask “choice” questions, single choice or multiple choice. Certification exams, for example, often consist entirely of choice questions.
Even if you want to offer other question types, grading scenario answers becomes unscalable for large programs. Avoiding bias and recognizing nuance and intent are incredibly hard to train into whoever reviews the exams. Because of these limitations, it has been difficult for learning programs to climb past basic assessment to the higher levels of the Kirkpatrick model.
Now imagine a world like the one described in this Barthakur study, which can be summarized as:
Multi-session course on leadership skills
3 to 4 reflective questions per session
An AI-driven coding scheme to rate the depth of the reflection answers
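To make the idea of a coding scheme concrete, here is a minimal sketch of how an automated depth rating could work. The labels, keyword lists, and thresholds below are invented for illustration; they are not the scheme the Barthakur study actually used, which relied on trained NLP models rather than simple heuristics.

```python
# Toy "coding scheme" for rating reflection depth.
# The marker words and thresholds are illustrative assumptions,
# not the rubric from the Barthakur study.

REFLECTIVE_MARKERS = {"because", "realized", "learned", "next time", "instead", "why"}

def rate_depth(answer: str) -> str:
    """Rate a free-text reflection as shallow, moderate, or deep."""
    text = answer.lower()
    markers = sum(1 for m in REFLECTIVE_MARKERS if m in text)
    words = len(text.split())
    if words < 15 or markers == 0:
        return "shallow"          # short or purely descriptive answers
    if markers >= 2 and words >= 40:
        return "deep"             # sustained reasoning about causes and next steps
    return "moderate"
```

A real system would replace the keyword heuristic with a trained classifier, but the shape of the output, a depth rating per response, is the same.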
The results of the study showed that automating the review with artificial intelligence tools gave the study authors a way to rate the depth of reflection in the responses. This is brilliant. It begins to solve one of the issues customer education has as it relates to the Kirkpatrick model – Reaction, Learning, Behavior, Results.
It does this by leveraging these technologies to analyze customer responses through the following approaches (some additional terminology you may find helpful):
Dependency Parsing: This is grammar analysis. It determines the relationships between words and phrases.
Constituency Parsing: This is syntax analysis. It breaks a sentence into nested phrases and diagrams how those words and phrases fit together.
Stemming and Lemmatization: These are ways of reducing words to their roots. Stemming strips suffixes; lemmatization maps a word to its dictionary form. Both help train the system by removing the extras.
Text Classification: This builds the structure. Tagging and categorizing text creates the labeled structure a model learns from.
Cross-Validation: This is self-testing. Cross-validation lets you test your model on held-out data and improve its accuracy.
Regular Expressions: This is what you consider important. Regular expressions let you define patterns to search for, such as phone numbers, email addresses, or standard terminology.
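Two of the approaches above are easy to demonstrate with a few lines of code. The sketch below shows a regular expression extracting email addresses and a deliberately tiny stemmer; the email pattern is simplified, and production systems would use a real stemmer or lemmatizer from a library such as NLTK or spaCy rather than this toy suffix-stripper.

```python
import re

# Regular expressions: define the patterns you care about.
# This email pattern is a simplified illustration, not RFC-complete.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def extract_emails(text: str) -> list[str]:
    """Pull anything that looks like an email address out of free text."""
    return EMAIL.findall(text)

# Stemming: strip a few common suffixes to approximate the root.
# A real stemmer (e.g., Porter) handles far more cases than this.
def stem(word: str) -> str:
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word
```

With `stem`, "learning", "learned", and "learns" all collapse to the same root, which is what lets a model treat them as one concept.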
Three use cases for AI in Customer Education
The following are ways learning program leaders can consider applying AI to their practices. These technologies are available today through third-party applications, in some cases with low-code / no-code implementations.
Personalization: Presenting information to the learner in a personalized way, based on roles, geographies, previous searches, recent activities, or any other combination of criteria, can create a very tailored experience in digital learning or catalog review. Predictive tools present an opportunity to leverage recommendation engines. Adding a chatbot or other augmented intelligence to the experience can create a digital concierge to guide the learner through content options.
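The simplest form of this is a rules-based filter over the catalog, which a recommendation engine would then rank. The sketch below is a minimal illustration with invented course records and field names; a real predictive engine would score candidates with a trained model rather than exact matching.

```python
# Minimal sketch of rules-based catalog personalization.
# Course records, roles, and regions are invented for illustration.

CATALOG = [
    {"title": "Admin Basics", "roles": {"admin"}, "region": "any"},
    {"title": "Reporting Deep Dive", "roles": {"analyst", "admin"}, "region": "any"},
    {"title": "EU Data Residency", "roles": {"admin"}, "region": "eu"},
]

def recommend(role: str, region: str) -> list[str]:
    """Return catalog titles matching the learner's role and region."""
    return [
        c["title"]
        for c in CATALOG
        if role in c["roles"] and c["region"] in ("any", region)
    ]
```

An admin in the EU would see all three courses, while an admin elsewhere would not be shown the region-specific one.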
Federated Search: Presenting a unified interface, federated search provides an opportunity for the learner to see search results from all of the places we store information – knowledge base articles, learning catalogues, support tickets, community discussions, or any other sources we might have. Combine this with Digital Adoption Programs (DAP) and we have a powerful way to meet the learner where they are, presenting learning in multiple modalities depending on the starting point of that particular learning journey.
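Structurally, federated search means querying each content store and merging the hits into one result list. The sketch below stands in for real search back ends with hard-coded source data and substring matching; actual implementations would call each system's search API and rank the merged results by relevance.

```python
# Hedged sketch of federated search across content sources.
# The sources and titles are stand-ins for real search back ends.

SOURCES = {
    "kb": ["Reset your password", "Configure SSO"],
    "courses": ["Security fundamentals", "SSO for admins"],
    "community": ["How do I set up SSO?"],
}

def federated_search(query: str) -> list[tuple[str, str]]:
    """Return (source, title) pairs whose title mentions the query."""
    q = query.lower()
    hits = []
    for source, titles in SOURCES.items():
        for title in titles:
            if q in title.lower():
                hits.append((source, title))
    return hits
```

A single query for "SSO" surfaces the knowledge base article, the course, and the community thread together, which is the learner-facing payoff.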
Assessments: In some cases, we pre-test to properly place students in the curriculum (see personalization). In some cases, we evaluate as we go, hoping to record comprehension of the material. And then we close out with post-learning assessments, hoping the scores improve over the pre-test. Many programs will choose to do all three. Text analysis presents an opportunity to revolutionize this process. By using these tools in customer education, we give ourselves an opportunity not only to advance past choice-based questions, but to combine this analysis with the concierge experience to create an elevated learning experience. For me, this is very exciting.
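The pre-test/post-test comparison at the heart of this workflow is simple to express in code. The sketch below uses invented learner names and scores to show the basic mechanics: compute each learner's gain, then flag anyone whose post-assessment did not improve on their pre-test.

```python
# Sketch of comparing pre-test and post-test scores per learner.
# Names and scores are illustrative data, not from any real program.

def score_gains(pre: dict[str, float], post: dict[str, float]) -> dict[str, float]:
    """Gain per learner, for learners who took both assessments."""
    return {name: post[name] - pre[name] for name in pre if name in post}

def needs_follow_up(gains: dict[str, float]) -> list[str]:
    """Learners whose score did not improve after the course."""
    return [name for name, g in gains.items() if g <= 0]
```

A program dashboard could feed these flagged learners into the concierge experience described above for remediation.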
Certifications: There are many rules to follow if you decide to offer certifications on your product or industry. Building a certification program involves two distinct issues whose importance varies by organization: (1) building choice-based exams (with or without scenarios), and (2) proctoring exams. Choice-based exam questions fall into the same category as assessments, and AI gives us much more power there; Microsoft has been doing some outstanding work in this area. Exam proctoring can also leverage the concierge experience by using augmented intelligence to track all of the key points required.
If you want to learn more about AI, check out these sites: