
Opportunity

After releasing a beta version of its assessment product, Role IQ, the team set out to investigate how to increase learner engagement with the experience.

My Role

As Product Manager, I led all discovery, research, and hypothesis testing for this experiment.

About Pluralsight: Pluralsight is an online learning platform for technology professionals. 

The Approach

The Role IQ team ran an A/B test to evaluate a predictive skill inference model and determine whether it would increase engagement with the product. Predictive skill inference is a method of predicting a learner’s Skill IQ score for assessments they haven’t taken yet. The findings were used to decide whether resources should be dedicated to building a machine learning model into the Pluralsight product.

Key Research Questions and Areas:

  • Does inferring a learner’s score on an assessment they haven’t taken yet increase their likelihood of taking the assessment?
  • How can Role IQ utilize the predictive skill inference model in its experience?
  • Should resources be dedicated to building out a predictive skill inference model?

Study Design

Hypothesis: Prompting a learner with their predicted skill level for a given skill in their role will increase the number of assessments they take within a role.

Experiment: The team ran an A/B test of different notification messages targeting learners who had started a Role IQ but completed only one assessment. The control group received the traditional reminder notification, while the test group received messaging about their inferred skill level for the role. The experiment ran for three weeks; overall, 2,749 learners saw a notification: 1,412 in the control group and 1,337 in the test group.
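A split like this is commonly implemented by deterministically hashing each user ID into a bucket, so a learner always sees the same variant across sessions. The sketch below illustrates the general technique; the function and experiment name are illustrative assumptions, not Pluralsight's implementation.

```python
import hashlib


def assign_bucket(user_id: str, experiment: str = "roleiq-skill-inference") -> str:
    """Deterministically assign a user to 'control' or 'test'.

    Hashing the user ID together with the experiment name keeps
    assignments stable across sessions and independent between
    different experiments running at the same time.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 == 0 else "control"


# The same learner always lands in the same bucket:
assert assign_bucket("learner-123") == assign_bucket("learner-123")
```

Because the hash is uniform, a large population splits roughly 50/50, which matches the near-even control (1,412) and test (1,337) group sizes reported above.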

Target Metrics:

  • Average # of assessments completed per user (among users who had completed at least one assessment) during the experimental timeframe
  • % of users who took an assessment after viewing the experimental notification

Covariate metrics:

  • # of views per notification category
  • # of clicks per notification category

The Outcome

The team found no significant difference in the average number of assessments completed between control and test group users after viewing the experimental notification. However, there was a significant difference in the percentage of control vs. test users who took an assessment after seeing the notification.

Learners were more likely to take an assessment after viewing a notification containing a predicted Skill IQ than after seeing a traditional role engagement notification.
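A difference in conversion rates like this is typically checked with a two-proportion z-test. The sketch below uses the reported group sizes (1,412 control, 1,337 test) but hypothetical conversion counts, since the study does not report how many learners in each group actually took an assessment.

```python
from math import erf, sqrt


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))


# Hypothetical conversion counts for illustration only -- the study
# reports group sizes, not per-group conversions.
p_value = two_proportion_z_test(conv_a=115, n_a=1337, conv_b=75, n_b=1412)
```

If the resulting p-value falls below the chosen significance threshold (commonly 0.05), the difference in the percentage of users taking an assessment is deemed statistically significant.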

The findings from this study were used to secure buy-in for resources to build the skill inference machine learning model into the Pluralsight platform. As of June 2019, the model is in production and powers several experiences in the platform.