
How do you measure Customer Education?

When developing your Customer Education strategy it is important to measure KPIs across the learner journey, from acquisition of new users to the impact that your program has on adoption and retention.



Estimate the value of your program with the Budget & ROI Calculator



What are the top Customer Education Metrics?

This article reviews the most common Customer Education KPIs used to evaluate course or program effectiveness:

  • Site visits by channel

  • Conversion rate by course

  • Product engagement

  • Adoption rate

  • Time to Key Activation Events

  • Stickiness

  • Feature retention

  • Support requests & ticket volume

These are listed roughly in the order you would evaluate them across the user journey, from driving awareness of your courses through to the impact those courses have on important Software-as-a-Service (SaaS) metrics like Adoption, Activation, and Retention. There are several different tools used to evaluate these metrics, though as you will see, none of them are particularly complex provided you have set up your data collection properly.


Let’s review these one-by-one…



Site visits by channel

What this answers:

How effective are my marketing or awareness activities at driving new users to my learning resources?


Here’s the measurement:

Count of users who visit your course page / Count of impressions of marketing activity




Too often people forget the importance of treating coursework like a marketing campaign. However incredible the learning experience, it is bound to fail if you don’t have a plan for telling people about it.


Site or page visits by channel helps you identify where the people who consume your content are coming from and how effective your marketing activities for that content are. For new, public-facing courses or features, adoption and engagement rates suffer badly if you don’t first get people to the page where they need to sign up in the first place.



What to measure:

  • For courses: Unique pageviews on registration pages for a specific course

  • For products: Unique pageviews on the Free Trial or Signup form page

  • For features: Unique pageviews on feature page (often requires Pendo or similar)



How to measure:

  1. Navigate to Google Analytics

    1. Select Behavior > Site Content > All Pages

    2. Filter by specific page titles or URLs relevant to this campaign

  2. If possible, run this same report but show the various marketing channels being used. The most common ones are:

    1. Paid advertising

    2. Email

    3. Organic (people who arrive at your site from search engines)

  3. Measure traffic to these pages & export csv

    1. Page views

    2. Unique Pageviews

  4. Compare these pageview counts to any advertising activity

    1. How many impressions did you get for your ads related to this activity

    2. How many emails did you send out

    3. Etc

  5. Divide the pageviews by the total number of impressions (see the sketch after this list)

    1. This is your success rate with bringing people to your site after engaging them with promotional activity

  6. Now compare this with your signup / conversion rate
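
If you export these figures to a CSV, the arithmetic in step 5 is trivial. Here is a minimal Python sketch; the channel names and counts are hypothetical stand-ins for your own Google Analytics and ad platform exports:

```python
# Minimal sketch of step 5: visit rate per channel from exported counts.
# The channel names and numbers are hypothetical -- swap in the figures from
# your own Google Analytics and ad platform exports.
import pandas as pd

data = pd.DataFrame({
    "channel": ["Paid advertising", "Email", "Organic"],
    "unique_pageviews": [420, 310, 950],
    "impressions": [52000, 8800, 31000],
})

# Success rate = pageviews driven by a channel / impressions served on that channel
data["visit_rate"] = data["unique_pageviews"] / data["impressions"]

print(data[["channel", "visit_rate"]].to_string(index=False))
```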




Now that you’ve got the data, what are some tactics you can use to improve these numbers? Let’s look at a few of these site visit measurements in more detail:


Measure: Count of people visiting site / count of people targeted with the promotion

  • If the % of people visiting the site after being hit with a promotional activity is very low (<1%) then consider improving your ad targeting or messaging

    • A/B test messages

    • Increase budget

    • Change / modify cohorts

    • Try a programmatic advertising platform like Choozle

    • Try a different advertising channel (Facebook, LinkedIn, etc)


Measure: Count of people viewing landing page / Count of people visiting site

  • If the % of people visiting the landing page where signup takes place is very low, then consider making this path easier

    • Put a link in the header

    • Send people to a different page / landing page

    • Reduce buttons on page

    • Analyze where people go & get stuck


Measure: Count of people completing registration form / Count of people viewing landing page

  • If the % of people completing registration is low, consider making it easier for people to convert

    • Put a free teaser video on the page that explains what the course covers

    • Reduce the number of form fields



Ready to estimate your program value? Download the Budget & ROI Calculator




Conversion Rate by Course


What this answers:

How effective is the conversion path to sign up for my lessons?


After you have some means of identifying how many people are showing up on your site or course landing pages, the next step is to get them to sign up. A good benchmark is to expect about 5% - 7% of people to complete a form and sign up for a free course. The way to evaluate this is fairly straightforward:


Here’s the measurement:

Count of people who started the course / count of people who viewed the course page
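
As a quick illustration, here is a tiny Python sketch of that calculation; the counts are made up, but the arithmetic is the same whether the numbers come from Google Analytics or your LMS:

```python
# Minimal sketch: conversion rate for one course. The counts are hypothetical.
course_page_views = 1200  # unique views of the course registration page
course_starts = 78        # people who signed up and started the course

conversion_rate = course_starts / course_page_views
print(f"Conversion rate: {conversion_rate:.1%}")  # 6.5% -- inside the 5-7% benchmark
```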




You can measure this by course, and if your data allows, you can also look at it at the account level rather than the individual level. Just like with any software feature, there are some common reasons why people land on a page but don’t complete the conversion.


Strategies to consider are:

  • Increase sales outreach (have someone follow up)

  • Add touch points (in-app messaging or email automation)

  • Offer a preview video or lesson (show people what the course looks like)

  • Test a different price





Product Engagement

What this answers:

After someone has taken a course, what impact does that have on their use of the product?


One of the primary goals of Customer Education is to drive engagement with product features, particularly those that are believed to correlate with longer term adoption by users. The common way of measuring engagement with SaaS products is to evaluate how many users touch a product on a daily basis vs. a monthly basis.


These two metrics are expressed as:

  • Daily Active Users (DAU)

  • Monthly Active Users (MAU)


Here’s the measurement:

Engagement = DAU / MAU




An example:

An engagement ratio (DAU divided by MAU) of 0.3 implies that 30% of the users who use the product at least once in a month also use it on a daily basis, i.e., at least once a day.
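
If you have raw event data (one row per user action with a date), the ratio is easy to compute. Below is a minimal Python sketch; the events table is a hypothetical stand-in for an export from Pendo, Mixpanel, or Google Analytics:

```python
# Minimal sketch: engagement as average DAU divided by MAU for one month.
# The "events" table is hypothetical -- in practice it comes from your
# product analytics export (Pendo, Mixpanel, Google Analytics, etc.).
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 1, 2, 4, 1],
    "date": pd.to_datetime([
        "2025-03-03", "2025-03-04", "2025-03-04", "2025-03-10",
        "2025-03-11", "2025-03-18", "2025-03-25", "2025-03-26",
    ]),
})

march = events[(events["date"].dt.year == 2025) & (events["date"].dt.month == 3)]
mau = march["user_id"].nunique()                         # distinct users in the month
dau = march.groupby("date")["user_id"].nunique().mean()  # average daily distinct users

print(f"Engagement (DAU/MAU): {dau / mau:.2f}")
```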


For Customer Education metrics, it is important to look at this both for all users and for the users who have consumed your training content. The ideal outcome is that the cohort of users who have consumed your educational materials engages more with those target features than the cohort that did not take any training.


A common benchmark for SaaS feature engagement is that anything above 50% is considered the “holy grail.” So if you can help drive things above that number with training, you should feel great. There are a couple important adjustments you may want to consider with your data however:



How to measure engagement (outlined in greater detail here):

  1. Adjust for seasonality & holidays

    1. Evaluate your user behavior in Google Analytics, MixPanel, or Pendo

    2. If you have a weekly seasonal trend like activity only on M-F, then remove the weekends

  2. Separate Customer Engagement from User Engagement

    1. Break these two types of people into segments and evaluate separately

    2. Users are non-paying visitors & free trial users

    3. Customers are paying accounts

  3. Look at engagement from key accounts

    1. Isolate the top 10% - 20% of your customer base

    2. Measure engagement for these customers separately from the others





Adoption Rate

What this answers:

Are new users continuing to find & engage with specific features?


Let’s say we’re helping a Product Manager launch an important new feature. We’re going to deliver some remote learning solutions through an LMS and have our writers craft a bunch of great troubleshooting guides for the knowledge base. Too often our evaluation of the success of these activities stops at Engagement. We see an initial spike in users trying out the feature, but then what? Ideally we’ll see all new users follow that same trend even though they may have never seen the initial marketing campaign.


Here’s the measurement:

(Count of new users who engaged with the feature / Total number of users) x 100




This will show us the percentage of new users who are engaging with the product or feature compared to the size of all users on the platform. Since what we want to impact is self-discovery of these features, it is important to measure how much our training content improves the likelihood that a user will follow through and actually engage with the feature over time.
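
Here is a minimal Python sketch of that comparison; the cohort counts are hypothetical, but the calculation mirrors the formula above:

```python
# Minimal sketch: adoption rate of a feature, compared between users who
# completed the related training and users who did not. Counts are hypothetical.
def adoption_rate(new_users_engaged: int, total_users: int) -> float:
    return new_users_engaged * 100 / total_users

trained = adoption_rate(new_users_engaged=90, total_users=600)      # took the course
untrained = adoption_rate(new_users_engaged=140, total_users=1800)  # did not

print(f"Trained cohort:   {trained:.1f}%")    # 15.0%
print(f"Untrained cohort: {untrained:.1f}%")  # 7.8%
```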


You can read more about ways to increase product adoption here.





Time to Key Activation Events

What this answers:

How long does it take for users to complete activities in my application that correlate with greater customer lifetime value?



A Key Activation Event (KAE) is a behavior or activity completed by users that appears to make them better customers. This measurement looks at the average time it takes a new customer to use a feature, or an existing customer to use a new feature, for the first time.


An example of a Key Activation Event might be: We’ve discovered that our best customers almost always invite a coworker to the platform within 24 hours of signing up. So the goal of this Getting Started coursework is to drive people to take that step. Then we will measure whether the average time taken by new users to complete this step goes down.


Of all the measurements here, this is likely one of the more statistically complex ones. It helps to have a BI resource help with identifying these Key Activation Events. Usually it requires evaluating large datasets and running multivariate analysis to figure out what specific events correlate to these outcomes. When you find them, it is incredible.


The Time To Key Activation Events can be measured in minutes, hours, or days depending on the relevance or expectation. For more information on KAEs and common examples, see this article.


How to measure Time To KAEs:

  1. Make a list of all actions in your application that a user can take

  2. Evaluate the actions taken by new users as they explore the platform (using Pendo or similar)

  3. Isolate your “perfect customers” from amongst this analysis

  4. Identify which of the actions taken by these perfect customers are the common ones that happen first

  5. These actions are considered your Key Activation Events (KAE) for good users
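
Once the KAEs are identified, measuring the time to reach them is a simple join between signup times and first-event times. Below is a minimal Python sketch, using hypothetical signup and event tables in place of a real Pendo export:

```python
# Minimal sketch: time from signup to a user's first Key Activation Event.
# The "signups" and "events" tables are hypothetical stand-ins for product
# analytics data (Pendo or similar).
import pandas as pd

signups = pd.DataFrame({
    "user_id": [1, 2, 3],
    "signed_up_at": pd.to_datetime([
        "2025-03-01 09:00", "2025-03-02 14:00", "2025-03-03 08:00",
    ]),
})

events = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "event": ["invited_coworker", "created_report", "invited_coworker", "invited_coworker"],
    "occurred_at": pd.to_datetime([
        "2025-03-01 17:30", "2025-03-02 10:00", "2025-03-04 09:15", "2025-03-03 08:45",
    ]),
})

kae = "invited_coworker"  # the Key Activation Event identified in your analysis
first_kae = (events[events["event"] == kae]
             .groupby("user_id")["occurred_at"].min()
             .rename("first_kae_at")
             .reset_index())

merged = signups.merge(first_kae, on="user_id", how="left")
merged["hours_to_kae"] = (merged["first_kae_at"] - merged["signed_up_at"]).dt.total_seconds() / 3600

print(merged[["user_id", "hours_to_kae"]])
print(f"Median time to KAE: {merged['hours_to_kae'].median():.1f} hours")
```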




Stickiness

What this answers:

Do users continue to engage with target features over time?


We already discussed Engagement, which is a simple measurement of DAUs divided by MAUs. This serves as a snapshot in time of how engaged your users are, generally or with a specific feature. Stickiness is similar to engagement, but it looks at this interaction over time. Another way of asking this is: Do my users keep coming back?


Ideally a sophisticated Customer Education program will drive both engagement and stickiness. Building courses that give users a reason to return to the product, targeted specifically at accounts that appear to be disengaging, is complicated but achievable when the program is properly resourced and has access to the right data for decision-making.


Here’s the measurement:

Daily Active Users / Monthly Active Users [shown over time by week, month, or quarter]
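
If you already compute DAU/MAU for engagement, stickiness is just that same ratio tracked period over period. Here is a minimal Python sketch with a hypothetical event log:

```python
# Minimal sketch: stickiness as the DAU/MAU ratio tracked month over month.
# The "events" log is hypothetical (one row per user action per day).
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 2, 1, 3, 1, 2, 1, 1, 2, 3],
    "date": pd.to_datetime([
        "2025-01-02", "2025-01-02", "2025-01-15", "2025-01-20",
        "2025-02-03", "2025-02-03", "2025-02-17",
        "2025-03-05", "2025-03-05", "2025-03-28",
    ]),
})

events["month"] = events["date"].dt.to_period("M")

def monthly_stickiness(group: pd.DataFrame) -> float:
    mau = group["user_id"].nunique()
    dau = group.groupby("date")["user_id"].nunique().mean()
    return dau / mau

# One DAU/MAU value per month -- plot or tabulate these to watch the trend
print(events.groupby("month")[["user_id", "date"]].apply(monthly_stickiness))
```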



For more on stickiness, you can read about it in a great resource created by Pendo here.







Feature Retention

What this answers:

Are my users building enduring habits inside my product?


Retention itself is not a difficult measurement to understand; however, it can be a little challenging to report on. Often the data related to retention needs someone to actively manage it and remove noise from the dataset. Accounts stop and start, individuals use multiple emails for free trials, and people may not be considered activated until they complete a few Key Activation Events. It helps to have someone in BI or finance assist with this.


The question we’re asking with retention as it relates to Customer Education is: Are the training activities we provide making customers stay longer? This is usually analyzed in cohorts. That is, all new customers in a given month become part of a group. You then review how many customers from this group are still paying after one month, two months, and so on. This is also referred to as a “survival curve” and is expressed as a percentage.


Here’s an example:

In March 2025, our software had 1000 new paying customers. Let’s refer to this as the “March Cohort.” The next month, in April 2025, only 840 of the March Cohort accounts were still customers. The retention rate for the March Cohort was thus 84% in Month 1. In May, only 655 accounts from the March Cohort were still customers, so the retention rate for the March Cohort was roughly 66% in Month 2.


You will evaluate these Month 1, Month 2, Month 3, etc. figures over time for each new cohort. As you deliver training solutions, you would want to see the retention rate improve for those accounts that interacted with your education resources.
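
The arithmetic itself is simple once the account data is clean. Here is a minimal Python sketch using the March Cohort figures from the example above; in practice the counts would come from your billing data:

```python
# Minimal sketch: survival curve for a single cohort, using the March Cohort
# figures from the example above. In practice the counts come from billing data.
cohort_size = 1000               # new paying customers in the cohort month
still_paying = {1: 840, 2: 655}  # accounts from the cohort still active N months later

for month, remaining in still_paying.items():
    print(f"Month {month}: {remaining / cohort_size:.0%} of the March Cohort retained")
```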





Support Requests & Ticket Volume

As your learners and users explore your solution, they will invariably seek help or try to answer their own questions. Two good ways to measure support requests are to mine the data coming in through key support channels such as:

  • Support tickets

  • Knowledge Base search queries


Support Tickets

Support Tickets can be used to identify where users struggle with your solution. Depending on the request and its importance for driving product adoption, these issues can make good candidates for courses in your learning management system.


Another good use of support tickets is to define what articles need to be published in your Knowledge Base. A good objective is to evaluate your support tickets and identify the top 20% of issues encountered by users, then write articles that cover in detail how to resolve these common issues.
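
One way to approximate that "top 20%" is a simple Pareto-style count of tickets by category. Here is a minimal Python sketch with hypothetical categories:

```python
# Minimal sketch: rank support ticket categories by volume and keep the handful
# that account for most tickets. The categories are hypothetical -- in practice
# they come from your help desk export (Zendesk, Intercom, etc.).
import pandas as pd

tickets = pd.Series([
    "login", "billing", "login", "export", "login",
    "billing", "integration", "login", "export", "billing",
])

counts = tickets.value_counts()
cumulative_share = counts.cumsum() / counts.sum()

# Categories that together cover roughly the first 80% of ticket volume --
# good candidates for detailed knowledge base articles
print(counts[cumulative_share <= 0.8])
```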


Knowledge Base Queries

Many platforms allow you to see the words people type into the search window on your Knowledge Base. In some cases, this might require connecting and configuring Google Analytics.


We recommend analyzing these search queries periodically, every quarter or two, and speaking with the tier 1 support team to identify gaps in knowledge base articles. Make it a goal to have 100% of the top support issues answerable by a knowledge base article. Then measure whether the support requests for these types of issues go down over time.


