A way to measure skills that adapts as fast as technology
The science behind our new adaptive skill measurements
Update: Adaptive skill measurement is now Pluralsight IQ.
Skills change—no matter your industry or role. That’s a given. What was needed for your job five years ago, or maybe even just six months ago, might not be as valid today.
As a technology professional or team leader, no one understands this better than you. With the rate of change accelerating, how do you not only keep up, but also measure your skills accurately and easily? How can your team keep up? Start with our adaptive skill measurement, which, in as little as 20 questions and five minutes, provides a rating of your skill level and pinpoints knowledge gaps, so you know where to start learning.
We sat down with two of Pluralsight’s masterminds, Mike Kowalchik, VP of Product Architecture, and Krishna Kannan, Director of Product & Assessments, to uncover the science behind our adaptive skill measurements and what makes them unlike anything the industry has seen.
What makes our skill measurement technology “adaptive”?
(K): The questions you see are tailored to your skill level, and the next question you get is based on how you answered the previous one. So there's no fixed length or fixed topic coverage, but rather a highly personalized, dynamic experience.
So how do you determine the first question to ask someone?
(K): The system makes a couple of assumptions. First, it assumes that the population of people who know that skill is normally distributed, which is a fairly safe assumption to make. Picture a bell curve: roughly two-thirds of people land somewhere in the middle, and everyone else is off in the tails. So we assume you're going to be in that average area and start you with a middle-ground question. Then once you start answering, that's when the questions get easier or harder. To facilitate this smart, adaptive nature, we employed Item Response Theory and Bayesian techniques.
Item Response Theory is an established testing concept where you look at individual questions and characterize them by how many people get them right or wrong. And Bayes' Theorem lets you update the probability of an outcome given that something else has occurred.
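To make those two ideas concrete, here's a minimal sketch (our illustration, not Pluralsight's production model) of a standard logistic item response function combined with a grid-based Bayesian update of a learner's ability after each answer:

```python
import numpy as np

def p_correct(theta, difficulty, discrimination=1.0):
    """Logistic (2PL-style) item response function: the probability that a
    learner of ability `theta` answers an item of this difficulty correctly."""
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

# Candidate ability levels, with a standard-normal prior over them
# (the "population is normally distributed" assumption from above).
thetas = np.linspace(-4, 4, 401)
posterior = np.exp(-thetas**2 / 2)
posterior /= posterior.sum()

def update(posterior, difficulty, answered_correctly):
    """Bayes' rule: multiply the prior by the likelihood of the answer we saw."""
    likelihood = p_correct(thetas, difficulty)
    if not answered_correctly:
        likelihood = 1.0 - likelihood
    posterior = posterior * likelihood
    return posterior / posterior.sum()

# Start at a middle-ground question (difficulty 0), then adapt.
posterior = update(posterior, difficulty=0.0, answered_correctly=True)
ability = np.sum(thetas * posterior)                           # posterior mean
spread = np.sqrt(np.sum((thetas - ability) ** 2 * posterior))  # uncertainty
next_difficulty = ability   # pick the next question near the current estimate
print(f"ability ≈ {ability:+.2f} ± {spread:.2f}")
```

Each answer reshapes the posterior, and the next question is drawn from near the current estimate, which is what makes the session adaptive rather than fixed-length.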
Why did you decide to apply Item Response Theory and Bayesian techniques to these skill measurements?
(K): If skills are changing so fast, how can we assess someone's skills quickly and easily? And if the subject matter changes, how do we reflect that change both in their score and in future sessions? This is where Item Response Theory and Bayesian techniques came in. Item Response Theory is already used in adaptive testing, but the drawback is that it's a slow process. You have to write the test, validate it and then give it. And what happens if something changes during that process? You have to go back to your authors and validators and put it out all over again.
So we took the Item Response Theory framework and applied Bayesian approximation to it. Say we have a new question and we don't know how easy or hard it is; we'll assume it's medium. We throw it into the skill measurement, but don't let it affect anyone's score very much. Then, as we get data about that question, we grade its difficulty and make it count for more. That's where the Bayesian piece comes in.
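A hedged sketch of how that onboarding might look (the shapes, numbers and weighting scheme here are our assumptions, not the production system): a new question gets a wide prior centered on medium difficulty, the estimate sharpens as responses arrive, and its weight in scoring can grow with that certainty:

```python
import numpy as np

def p_correct(ability, difficulty):
    """Logistic IRT response probability."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# A brand-new question: assume "medium" (0.0), but with a wide prior.
difficulties = np.linspace(-4, 4, 401)
prior_sd = 2.0
posterior = np.exp(-difficulties**2 / (2 * prior_sd**2))
posterior /= posterior.sum()

def observe(posterior, learner_ability, answered_correctly):
    """Refine the difficulty estimate from one response by a learner
    whose ability we already estimate well."""
    likelihood = p_correct(learner_ability, difficulties)
    if not answered_correctly:
        likelihood = 1.0 - likelihood
    posterior = posterior * likelihood
    return posterior / posterior.sum()

for ability, correct in [(1.2, True), (-0.5, False), (0.3, True)]:
    posterior = observe(posterior, ability, correct)

mean = np.sum(difficulties * posterior)
sd = np.sqrt(np.sum((difficulties - mean) ** 2 * posterior))
# One plausible scheme (an assumption, not the real formula): let the
# question count for more in a learner's score as uncertainty shrinks.
score_weight = 1.0 / (1.0 + sd**2)
print(f"difficulty ≈ {mean:+.2f} ± {sd:.2f}, score weight {score_weight:.2f}")
```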
Our adaptive skill measurements were designed to use Item Response Theory and Bayesian methods in concert, creating a new, accurate skill measurement unlike anything before it.
How exactly do these skill measurements “evolve” with technology?
(K): Let’s go back to Item Response Theory for a bit. One thing we do that's really different: typically, you only rate test takers. You're trying to find out how skilled a test taker is. We're doing that, but in parallel we're also rating how difficult each question is. So we approximate the difficulty of the question with a margin of error, and then every time a person answers it, we adjust our assumptions.
When we first write an adaptive skill measurement, we actually put it through a validation mode with learners to rate the questions. Every time they answer a question, our estimate of that question's difficulty moves up or down and becomes more certain. The skill measurement can then evolve as new questions get added to it, and as people change the way they answer a question (i.e., as skills evolve).
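As a toy illustration of that validation pass (everything below is simulated; the true difficulty, learner pool and update rule are made up for the example), you can watch the margin of error on a question shrink as learners answer it:

```python
import numpy as np

rng = np.random.default_rng(7)
true_difficulty = 0.8                    # hidden from the estimator
grid = np.linspace(-4, 4, 401)
posterior = np.full_like(grid, 1.0 / grid.size)   # flat prior over difficulty

for n in range(1, 201):
    ability = rng.normal()               # a learner drawn from a normal pool
    p_true = 1.0 / (1.0 + np.exp(-(ability - true_difficulty)))
    correct = rng.random() < p_true      # simulate the learner's answer
    likelihood = 1.0 / (1.0 + np.exp(-(ability - grid)))
    if not correct:
        likelihood = 1.0 - likelihood
    posterior *= likelihood
    posterior /= posterior.sum()
    if n % 50 == 0:
        mean = np.sum(grid * posterior)
        sd = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
        print(f"after {n:3d} answers: difficulty ≈ {mean:+.2f}, margin ≈ ±{sd:.2f}")
```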
What other methods do we use to determine if questions are becoming outdated?
(K): The more manual part of our adaptive skill measurements is collecting feedback from learners on questions. They can tell us if a question is erroneous, has a typo or some other problem, or is no longer relevant. We have a team that manages the data, reviews these flags as they come in and decides how to address them.
We also have an internal app we use to monitor the skill measurements and their overall quality; it provides both test-level and question-level statistics. So if an adaptive skill measurement is performing poorly, we can drill in and say: it's these five questions that are the issue. Maybe we need to yank them or change them.
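As a rough sketch of what question-level triage like that could look like (the statistics, thresholds and rules below are illustrative assumptions, not the internal app's actual logic):

```python
from dataclasses import dataclass

@dataclass
class QuestionStats:
    """Hypothetical question-level statistics a monitoring app might track."""
    question_id: str
    attempts: int        # times the question has been served
    pct_correct: float   # fraction answered correctly, 0..1
    flags: int           # learner reports: typo, error, no longer relevant

def needs_review(q: QuestionStats) -> bool:
    """Illustrative triage: surface questions that are flagged often, or are
    so easy or so hard that they carry almost no signal about skill."""
    if q.attempts < 30:
        return False     # not enough data to judge yet
    flag_rate = q.flags / q.attempts
    return flag_rate > 0.02 or not 0.03 <= q.pct_correct <= 0.97

stats = [
    QuestionStats("q-101", attempts=420, pct_correct=0.99, flags=1),
    QuestionStats("q-102", attempts=380, pct_correct=0.61, flags=14),
]
for q in stats:
    if needs_review(q):
        print(f"{q.question_id}: pull for review or rewrite")
```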
Who writes the questions?
(K): Our authors and other experts who we’ve vetted extensively. We know their expertise and have invested heavily in question-writing tools to get great content.
We don’t give people scores. We give them ratings. Explain this.
(K): With our system, we're not actually giving you points; we're rating you. We have this big range of what your rating could be, and we get a better idea of where you fall in the range as you keep answering more questions.
So if I were to take an adaptive skill measurement now and I get my rating, what's it going to look like?
(K): You get three pieces: a rating, a label and a percentile. Ratings are on a scale of 0-300, where 150 is considered an average yet proficient rating. Labels explain your rating: are you novice, proficient or expert? And then you get a percentile.
We display all this information with a slightly modified bell curve graph that illustrates how those things work together. So if you’re up and to the right, that’s good; down and on the left, not so good.
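A minimal sketch of that display logic, assuming the 150 average from above plus a hypothetical standard deviation and label cutoffs (the real per-skill values aren't published here):

```python
from math import erf, sqrt

SCALE_MEAN, SCALE_SD = 150.0, 50.0   # 150 is average; the 50 is an assumption

def percentile(rating: float) -> float:
    """Percentile on a normal curve, via the standard-normal CDF."""
    z = (rating - SCALE_MEAN) / SCALE_SD
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

def label(rating: float) -> str:
    """Illustrative cutoffs only; the real boundaries may differ."""
    if rating < 100:
        return "novice"
    if rating < 200:
        return "proficient"
    return "expert"

rating = 180.0
print(f"rating {rating:.0f}: {label(rating)}, {percentile(rating):.0f}th percentile")
```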
How does the adaptive skill measurement benchmark you against your peers and the rest of the industry?
(K): Our adaptive skill measurements benchmark you against your peers because they’re based on the normal distribution of users of a skill (what we discussed earlier). For each skill, we create a curve with a mean and standard deviation. And each rating is a point on that curve.
(M): One way you can think about this: it's almost an analog to how a traditional test works, right? No single item is scored on its own, but aggregate enough items together and you get a sense of how good someone is based on the proportion of that collection they answered correctly.
Why are these skill measurements so important to the tech industry specifically?
(K): They give you a good indication of whether or not you can succeed: whether you can get the job you want, or the project you want to be on. Those are the macro concerns of our community. At a smaller level: How much better are you getting? How are your skills developing? What do you need to learn? This matters to tech professionals and businesses alike. Add those things together, and you address the macro issue of the technology skill half-life.
How will this change how professionals think of testing?
(M): It actually makes tests more fun. Most people don't think of testing as fun, but because we're adaptive, we're always trying to find questions that will challenge you. On most tests, there's a whole section of super-easy questions, and you're bored. Or vice versa: you hit a section where you have no idea what the questions are asking, and it's super frustrating. With our adaptive skill measurements, we're trying to zero in on your skill level, so you're constantly challenged, but not frustrated or bored.
We want to get the maximum amount of information in the minimum time, so we’re respectful, efficient and accurate. And this way, you can get back to what’s really important—learning what you actually need.
Ready for an adaptive experience? Rate your skills now.