
003 - Building better products using AI and ML

November 19, 2019

Curious about how we use AI and ML at Pluralsight?

On this episode, James Aylward dives into the development process behind Pluralsight's learner experience. He talks about the right way to implement artificial intelligence and machine learning, how to create a powerful culture, and the importance of aligning with users.


If you enjoy this episode, please consider leaving a review on Apple Podcasts or wherever you listen.

Please send any questions or comments to podcast@pluralsight.com.

Transcript

Jeremy

Hello and welcome to All Hands on Tech, Pluralsight's podcast about developing technology skills and embracing innovation. I'm Jeremy Morgan. When building a SaaS product, most engineers love talking about things like our infrastructure, the cool cloud services we're using, all the nifty new technology like NoSQL or Elastic Beanstalk. But what about the experience? Having a solid infrastructure is crucial, but the experience is what really matters to the people using your product. It's easy to throw some AI and machine learning into your SaaS offering just to say that you have it, but are you using it in the right way? Can you truly leverage AI to make the experience of using your product better? How do you do it?

Today I'll be talking with someone who has done just that. James Aylward is our head of learner experience here at Pluralsight, and he's led cross-functional teams focused on improving that experience. He talks about some of the challenges he's faced and how we're constantly striving to fine-tune the experience for our learners. Let's welcome James Aylward. How are you doing today, James?

 

James
Great. How are you doing tonight, Jeremy?

 

Jeremy
I'm doing great.

 

James
Awesome.

 

Jeremy
You're responsible for the learner experience here at Pluralsight. Can you tell us a little bit about what you're working on right now?

 

James
Yeah, sure thing. As part of the experience organization, we're responsible for all the learner-facing experiences: how the awesome content comes together and how we meet each learner in their moment of learning with the right modality and the right content at the right time to match their needs, both technically in what they're looking for and at whatever level they are, from novice to expert or anywhere in between. How do we get you the right content, the right form factor, the right modality at the right moment? That's the sort of overall matching exercise for the learner experience.

And with that, we have our partners who build all that great content, but we also provide a whole bunch of really interesting data to technology leaders about their labor force and how skilled up they are to meet the needs of their industry as we progress. All of that means that within the learner team, we have our search engine, our recommendation system, the home page, the course page, and the Skill IQ and Role IQ assessment engines, which are able to assess a learner's proficiency and then provide a personalized learning plan for how to attack their strengths and weaknesses in an optimal manner.

We also have a question and answer functionality within the platform where learners can help other learners. Increasingly we're making our experience entirely responsive so you can use it through the mobile web or you can use any of our native apps, too, in order to learn. Long story short, we're trying to work out how to get the best bit of content to you at the right time wherever you are in the world.

 

Jeremy

Nice. That's some pretty amazing stuff. What was your journey like getting here to Pluralsight?

 

James

Yeah, sure. My traditional background is that I've been a product person pretty much my whole career, from Staples on to Vista Print, and then we launched a startup called Gazelle, which was sold. Then I was looking at next options, and Fidelity Investments had an internal R&D incubator technology group called Fidelity Labs. They were looking at ways to bring in artificial intelligence and machine learning techniques, which I had been using somewhat in my previous roles to bring the right experience to every user, and had some experience with. But it was more about, how do we upskill an entire organization with these new capabilities that artificial intelligence and machine learning bring to the table, at a very established and traditional company like Fidelity Investments?

Through a lot of trial and error and getting things right and wrong, pretty soon I realized, "Hey, it's frankly all about empowering the domain experts with the right training and the right access to the right talent and data in order to bring AI and ML, and the promise thereof, to life." What happened was that after a while I was building my own Pluralsight internally at Fidelity. When the Pluralsight opportunity arose, I was like, "Look, this is a really compelling mission, and I feel like what the world needs is just a better understanding of what AI and machine learning can do."

And then beyond that, it's any tech skill. As soon as you empower someone with a little bit of knowledge about what a great use case for a certain technology is, then provide enough training for them to start that learning journey, you can really change people's outlooks and outcomes in their lives. The mission, I've seen it with my own two eyes just over and over again. I was really drawn to what Pluralsight was doing. I saw a huge need in the market for that, and not just the market. I see it as a huge need for the world to be able to unlock this human talent through providing easy access to really high quality learning materials within technology.

 

Jeremy

Yeah, absolutely. With the mix of AI and ML and then our other design strategies, it seems like we're combining a data-driven design and a human-driven design. Do you think there's a lot of conflict between those two things, or do you think they can be meshed together?

 

James

Oh, that's a key learning. Yeah, I see that as a huge need for human-centered design, and we need to always keep that. What the user wants is paramount. Having AI and ML is just a few more colors in your palette from a product design viewpoint, or a human-centered design view. But having that knowledge of what those capabilities can do provides more opportunity to fit that human need. Building AI or ML or blockchain or whatever technology just for the sake of using that technology is not going to hit the mark. You're not going to find the product-market fit.

Like any good product development process, identifying that right need and then working out the best way to fit that need in a way that no one else can, that will start to get you the product-market fit that you desire. Then if you are able to use AI and ML to do it in a way that no one else can, that builds a further incentive for people to use your product or your experience over others. I see them as highly complementary, but you need to start with a human need. Then if you can layer on the extra unfair advantage of AI/ML, you can really make an impact.

That was actually another reason why I came to Pluralsight: because we weren't doing AI and ML just because we need to do AI and ML. We were using artificial intelligence and machine learning to do that meshing exercise across an enormous content library, matching the different modalities and different clips to the myriad of needs and levels of the learners at any one point in their lives. That's a matching exercise that cannot really be done at scale with a rules-based development process. You need a probabilistic engine behind it in order to match this content to each learner for the optimal outcome. Our optimal outcome is reducing that time to skill acquisition. Whatever we can do to get you started, that's what we optimize for and that's what our algorithms are based around, which to me is cool: being able to use AI and ML in order to teach AI and ML. There's a whole meta thing there. But we're not just doing AI and ML for fun. We're doing it because there's a real reason for it: to meet a human-centered need.

 

Jeremy

Do you think there are some changes to the human process, say something like Lean, for instance? Do you think there's a change that has to be made to the human process in order to get that feedback quickly and get it into the system quickly, so you can make the changes to the AI and ML?

 

James

Yeah. Product development for AI/ML is an underappreciated art, I think. It's hard enough with traditional products to find that intersection of having a product that's desirable, viable, and feasible. But to add in the AI, you need to have a dataset that has the right qualities in order to make a prediction, for example. You also need to have the right infrastructure to be able to have models in production that respond to a learner's needs and also get smarter over time, which we do. Then you need to have the right compute, like supercomputers, to be able to crunch all that data in realtime and build new models. Then finally, how do you deliver that to the user? How do we explain to the user why we've come up with the recommendation we have? Which is, again, a human-centered challenge.

Now all of that adds five or six levels of complexity on top of what is already a tough job, which is finding great product-market fit, for the product managers, engineers, designers, data scientists, and all those people we have focused on that today. It's an emerging enterprise, product development for AI/ML. What you're hitting on there is the need for an automated feedback loop from the learner, or the user, back to the dataset, in order for that prediction engine or what have you to improve. Yeah, that's just one element of things to be aware of as you build out AI and ML. It introduces way more risk in, can you actually do it? Your feasibility risk goes up. But if you get there, and if you're able to have an AI and ML-based product that hits a human-centered design that people really care about, your product just takes off.

The other thing is that the team that can get there the quickest usually gets that data advantage, because their product starts learning quicker than everybody else's. You start to amass a bit of a moat in terms of your product over somebody else's, because you have that dataset, you have that data asset that is increasing over time, and the people who are trying to follow you into that market do not. There is a first-mover advantage for anybody who can crack the nut there.
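
To make the feedback loop idea a bit more concrete: in its simplest form, every learner interaction becomes a labeled example that flows back into the dataset the next model learns from. The sketch below is purely illustrative; the function names, the event shape, and the trivial completion-rate "model" are assumptions for this example, not Pluralsight's actual pipeline.

```python
# Minimal sketch of an automated feedback loop: each interaction is logged as a
# labeled example, and a trivial "model" (per-item completion rate) is rebuilt
# from that growing dataset. All names and numbers are illustrative assumptions.
from collections import defaultdict

feedback_log = []   # accumulating dataset of (item_id, completed) pairs
item_scores = {}    # the "model": a completion-rate score per item, used for ranking

def log_feedback(item_id, completed):
    """Record whether the learner finished the item we recommended."""
    feedback_log.append((item_id, completed))

def retrain(min_examples=1000):
    """Rebuild item scores once enough new feedback has accumulated."""
    if len(feedback_log) < min_examples:
        return
    totals, finishes = defaultdict(int), defaultdict(int)
    for item_id, completed in feedback_log:
        totals[item_id] += 1
        finishes[item_id] += int(completed)
    item_scores.update({item: finishes[item] / totals[item] for item in totals})
```

The point of the loop is the second half: the product keeps generating labels, so the model keeps improving without anyone hand-curating new training data.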

 

Jeremy

I've wondered about this. Just curious. Would a smaller unit of work mitigate the danger in that feedback loop? It sounds like we have an experimental process going here. Instead of grand, large, huge experiments with a lot of different features to them, does making that unit smaller, where we try smaller experiments each time, mitigate the danger of going down the wrong path or training the model in a way that we wouldn't want to?

 

James

With AI/ML product development, as with normal product development, yes, you want to have small experiments, small units of work, to be able to see whether there's any appetite at all for your value proposition in the market. To a certain extent, that's also true as you build out a data science stream to help with product development. Where it gets a little tricky, though, is that you need a certain level of data in order to produce most of the AI/ML applications that will make sense. In some regards, you need to have a good amount of data to even test whether something will work or not at scale. Unfortunately, sometimes you have to build it to see whether it works.

The other way you can do that is to try and fake it. But in order to fake that, to do a Wizard of Oz-style test where you are seeing whether a user actually wants that experience or not, you're presuming the AI or ML will work most of the time. If you are going to rig it up so that you can see whether your prototype works with customers before actually building out a full-scale model, you might want to introduce some deliberate wrongness to it, because what I've seen product teams do is say, "Yeah, if we had this AI/ML, it would always work great and it'll be fantastic and it'll be perfect for the end user." But what happens is, because it is, by its nature, a probabilistic solution, it will sometimes be wrong.

That's why you don't click on a movie recommendation from Netflix every single time. It's sometimes wrong. With Spotify, it's sometimes not a great song. If you're going to rig up a test to see whether the AI/ML product will be interesting to your end users, you need to introduce this level of wrongness, which is a new skill for product development in the AI/ML space. Anyway, you can do it and you should be testing regularly in small increments, but there is a need to get, not the entire dataset, but enough data to really be able to test whether the models will work, in parallel with the normal product development process that you're running.
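
One way to picture that "deliberate wrongness" in a Wizard of Oz prototype: instead of always showing the hand-curated best answer, swap in a wrong one at roughly the error rate you expect from the real model, so testers react to realistic, probabilistic behavior rather than a perfect oracle. A minimal sketch, with made-up names and an assumed 20% error rate:

```python
# Illustrative sketch of a Wizard of Oz recommender that is deliberately wrong
# some of the time. The catalog, names, and error rate are assumptions.
import random

def fake_recommendation(curated_pick, catalog, expected_error_rate=0.2):
    """Return the hand-curated 'right' answer most of the time, but swap in a
    random other item at roughly the error rate the real model would have."""
    if random.random() < expected_error_rate:
        return random.choice([item for item in catalog if item != curated_pick])
    return curated_pick

# Example: let a tester feel what a roughly 80%-accurate recommender is like.
catalog = ["Python Fundamentals", "Intro to SQL", "Docker Deep Dive"]
print(fake_recommendation("Python Fundamentals", catalog))
```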

 

Jeremy
How is feedback gathered for that process? How do you know if the model's working well and how do you gather that feedback from the user, from the learner?

 

James
We have a process called Directed Discovery, which we use for product development, which involves a lot of voice of the customer qualitative interviews to begin with. But we also look at prototype testing and then, did we actually hit the mark when we get this into production? Do we have that large scale customer acceptance? Those three elements we are always ...

The first one is identifying that human-centered customer need. When we do that, if we identify a human-centered customer need for something, the prototype step is then, can we build an AI/ML prototype that shows the power of what this could look like when it's real at scale? At that point we're able to show that to customers and get some realtime feedback as to whether they like the prototype or not.

The rubber really hits the road when we get it into production at a large scale, because it's not easy. There are a lot of AI/ML prototypes out there that never get to production, and that's not just us. That's everyone in the industry. You mentioned you were in DevOps before; DevOps for AI and ML, and having real models in production, is a different animal. Because you're generally making a prediction, or maybe clustering, based on realtime data that moves through the system in [inaudible 00:14:39], it's not like you've written a line of code and it stays the same. You've written a model, but the data might change or the industry might change. Something might happen to the user base and your model drifts out of production, or out of alignment with your goals.

Our team needs to constantly track how we're performing on the recommendation engine, for example. Say we have five or six or seven different models in production at any one time, and we see how many people are not just clicking through our recommendation engine. We're seeing whether they view the whole video, or enough of the video to indicate that they were happy about it. Did they provide ratings? Did they continue on their learning journeys throughout the experience or not? It's not just about the click. In that element, we've heard the voice of the customer, we've done some prototyping, we feel good about it, we've built it up to a production-level environment, and then we have constant realtime tracking on, is this hitting the customer use case? If it's not, then do we have other approaches or models that can jump in there (in this example, it's the different models that compete over the recommendation engine) to see if we have one that better matches the human-centered need?

In this case, it's almost like the AI/ML is iterating in realtime, getting smarter, and doing its own product development to make our product or recommendations better. It's fascinating to me. It's something that is really the next step in product development. But at the end of the day, we're just trying to hit the same need that every product manager has, and that's, how do we make sure that our product fits the needs of the learner?
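
As a rough illustration of "not just about the click," here is a toy way to score competing models in production on richer engagement signals (view completion, ratings, continued learning) and compare them. The signal names and weights below are hypothetical, not Pluralsight's actual metrics.

```python
# Toy sketch: compare competing recommendation models on engagement signals
# beyond the click. Event fields and weights are illustrative assumptions.
from statistics import mean

def engagement_score(event):
    """Blend click, watch completion, rating, and continued learning into one score."""
    return (0.2 * event["clicked"]
            + 0.4 * event["fraction_viewed"]        # 0.0-1.0 of the video watched
            + 0.2 * (event["rating"] or 0) / 5      # 1-5 stars, None if unrated
            + 0.2 * event["continued_learning"])    # did they start the next course?

def compare_models(events_by_model):
    """Average engagement per model; the better model earns more traffic next cycle."""
    return {model: mean(engagement_score(e) for e in events)
            for model, events in events_by_model.items()}

# Two hypothetical models competing over the same recommendation slot:
events = {
    "model_a": [{"clicked": 1, "fraction_viewed": 0.9, "rating": 5, "continued_learning": 1}],
    "model_b": [{"clicked": 1, "fraction_viewed": 0.2, "rating": None, "continued_learning": 0}],
}
print(compare_models(events))
```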

 

Jeremy

What has it been like building product teams? You've been building them across geographies with a bunch of different skills that are required and a bunch of demands. Like you were just talking about, there's all these different demands and considerations. What has it been like building these product teams?

 

James

That's been a lot of fun in itself. Well, our whole mission here is to democratize technology skills. It's funny that I'm starting to answer your question here using the company mission, but we really back into our systems architecture based on our company mission. The quickest way we feel we can get there, to democratize technology skills, is to have small, cross-functional, empowered teams working at scale to drive toward that mission. Three or four years ago, we identified, "Hey, we need to move off a monolithic architecture onto a cloud-based, polyglot-style architecture where we have microservices that have full autonomy within what we call a Bounded Context."

Right in front of me, right here in Boston, is a search team, and they have full autonomy over the search engine. What that means is they choose the entire tech stack, with help and input from our architecture team, but they are fully empowered to choose the tech stack that meets their needs. Then we have the right skill sets within that team. We have three or four software engineers who have come up to speed on search techniques throughout the world and become experts on our new search engine. With that we also have a product manager and a product designer exploring different ways to provide search results to our learners.

We've got architects and DevOps to directly support that team. They can launch to production and build in production all day long. It's not like there's a steering committee where they have to do 45 PowerPoint presentations in order to get something out [inaudible 00:18:28]. They need to move to production quickly. Because the whole team has gone through some of those voice of the customer sessions and also done the prototype testing, they have the context to be able to make the right decision quickly in order to maximize the value for the learner. And by doing that, enabling that, and empowering that, we're able to move towards our company mission that much quicker.

It all starts from the mission and comes down to, how do we have an operating system that enables us to have that scale across all these autonomous teams? That's why we have a lot of focus internally on practices, and we use Directed Discovery, which is out there. We also have Engineering at Pluralsight, which is an operating system for us and how we work and how we achieve the outcomes we're looking for.

Then the last bit of it is we use the OKR process, Objectives and Key Results. We want to work out what our objectives are and then work out how we measure progress towards those objectives on a six-month basis. That enables the teams to know what they're shooting for and work out the best way to produce that outcome, all of which reduces dependencies. It reduces a whole bunch of requirements to run things up and down hierarchies. They're ready to go. They've got the agency, and they've got the expectation, to create that possibility in whatever way they can in order to achieve the outcome that ladders up to our company mission. It goes all the way from the company mission down to, "Hey, what am I doing at stand-up today?" It's that refined.

 

Jeremy

It makes an excellent North Star, I would say, the mission.

 

James

Yeah, totally. It's one of the reasons people really enjoy working here at Pluralsight is that autonomy.

 

Jeremy

In order to have that autonomy, there has to be an element of trust, of course. I also work here at Pluralsight. I've been here for a little while now. I've noticed there's a lot of trust between the teams. How do you build that or encourage that? Do you think that's mostly in the way that we hire people or is it something that we instill in the culture?

 

James

So, yeah, we spend a whole bunch of time on team building. As we build out these experience teams and the bounded context teams, we deliberately look for alignment of talent, and how do we build up that trust? A part of that is to have teams involved in the hiring process and hire people into their teams. Through that, we're also encouraging a lot of diversity of thought, in particular. We moved away from cultural fit a long time ago. I would say it's more, "Can people live up to the values that we have internally at Pluralsight?"

Because we really walk the talk on the values we've all agreed to operate by within Pluralsight, I can have the trust that if I go to any Pluralsight office in the world, or to any of our remote locations or engineers, I know how to have that conversation. I know what operating system we're working on. That is really how you drive scale, as we're seeing with the success and continued growth of Pluralsight.

There is also a systematic element of trust in here, and that's partly because of our Engineering at Pluralsight approach, where we actively encourage a lot of pairing and mobbing. Part of that is because we do not have a traditional quality assurance process in our in-house software development. If something goes down or something goes bang in the night, it's the team that developed that code that is responsible for getting up and fixing it. Now what happens there is, because they have that ownership of the maintenance aspect of their product development as well as the development part, it ensures that there's a really healthy approach to building scalable code that will work in production. Because of that, there's a whole bunch of trust that needs to be developed amongst teams. On the rare occasion that things do go wrong, teams feel bad because they've lost some of that trust internally and from the other teams. It's almost like, "Hey, I have to really work on building that back up." It's okay, and we build them back up a little bit, rather than what you might see in potentially other engineering cultures.

 

Jeremy

It seems like the way you described it is a positive feedback loop. Essentially if you give everybody autonomy and the freedom to do whatever they want, they're going to be a little bit more cognizant, a little more directed in what they do to avoid losing that trust that's built.

 

James

People are here to drive towards the mission of democratizing technology skills, right? People don't want to let each other down. That actually builds a really positive culture, because you're all looking after each other and trying to make sure it all goes smoothly.

 

Jeremy

What advice would you give for tech leaders who are thinking about that model? Would you say to give everybody the autonomy they need and give them the freedom to do what they think is best, then rein it back in and see how that works?

 

James

It's not just "hey, you can do whatever you want" teams. As a leader [inaudible 00:23:39] , you need to make sure that everybody in the team understands that North Star, as we mentioned, but also be able to break that down into if your team is super successful, what does that actually look like? How did we walk towards that or run towards that North Star using the talent and resources available within your team to help other teams out. That's not at all telling them, "Hey, go do this and go do that and have a gunshot and here's a timeline." It's about saying, "Hey, we want to democratize technology skills, but let's use the search team," again, as an example. It's to do that, an outcome from the search team, and this is co-created, would be to reduce the exit rate on the search page because that indicates that people didn't find what they're looking for.

Now, how they do that and how that all works out, what works best with customers, is fully up to that team. They have all the autonomy in the world to go and find solutions that help us meet that key result. But you have to provide some cohesion and a vision of where we're all headed from the overall objective and key result. That is the connective tissue the leadership level provides. But it's not about gatekeeping, being the smartest person in the room, and telling people what they should do on a timeline. That just doesn't work at all at scale if you're looking for agile product development. And when I say agile, I don't mean the methodology. I mean truly agile, like responding quickly to what customers need.
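
The search-page exit rate mentioned as a key result can be made concrete: of the sessions that reach the search page, what fraction end there (suggesting the learner didn't find what they wanted)? A minimal sketch with made-up session data:

```python
# Minimal sketch: exit rate on the search page. Page names and sessions are
# hypothetical; this only illustrates the metric, not Pluralsight's analytics.
def search_exit_rate(sessions):
    """Each session is an ordered list of page names; exit rate is the share of
    sessions that reached search and then ended there."""
    reached_search = [s for s in sessions if "search" in s]
    exited_on_search = [s for s in reached_search if s[-1] == "search"]
    return len(exited_on_search) / len(reached_search) if reached_search else 0.0

# Hypothetical sessions: two reached search, one of them bailed out there.
sessions = [["home", "search", "course"], ["home", "search"], ["home", "course"]]
print(search_exit_rate(sessions))   # 0.5
```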

 

Jeremy

You've been here a little while. Speaking of scale, we've scaled up quite a bit. What are some things that you've learned along the way since you started here? Big takeaways.

 

James

The trust is huge. I'm glad you brought that up. But how do you develop trust? How do you build a culture? In my world, because I'm also in charge of the Boston office here for the experience organization, this office was 25 people when I started and now we're up to just over 80. We had an awesome culture and experience when I joined; I was really lucky to inherit that. How do we grow that quickly without losing the coolness or the cohesion that we'd developed with those 25 people who were here originally in Boston? That's all about being really declarative about what our values are. We take onboarding very seriously across the company, all the way from the company-wide level down to each and every team. Each and every team has its own onboarding approach. But the idea here is that everybody's onboarded at every level. Part of that onboarding means, what is the company mission? What is the learner organization's mission? What are we driving to? What objective is every team trying to achieve for that six months?

Breaking that all down for every member, and also talking about how we do things. That's Directed Discovery on the product development side, and then it's also Engineering at Pluralsight, and being really declarative within those two methodologies about what we do. That helps you scale. Working on the practices, the operating system, the operating procedures of how we do things is what lets me confidently build out team after team in order to democratize technology skills. If you're going to spend a whole bunch of time architecting the solution, breaking that down, and telling people to build into that, that's way less valuable than spending time on the operating system, the values, and the culture, and building up a culture where you can hire smart people and ask them to work it out.

 

Jeremy

In your journey so far, you've had a pretty long career. What are a few things that you wish you knew when you started out that you know now?

 

James
I've learned a lot along the way. I don't know. When I was a product person leading different product launches and things at various companies, I probably took way too much of the credit myself: "For that product launch," or, "We launched that product and we made millions." Yes, we did. It was great. We made a whole bunch of money, everybody came together, and we changed the way people, I don't know, bought business cards at Vista Print or something. But I think the actual product team, yeah, we did a lot of cool stuff, but it's the system around that product team that enabled that product team to do what it did. Which means you had to have cutting-edge engineering, you had to have a great architecture team, you had to have security, both physical and systematic, and cyber security across the whole system.

How does your HR system work? How do your incentives align to create the outcome that you want to have? How did you hire, how did you onboard? The whole system around our product team is almost more important than the actual product team. Good point. I could see a really good debate there. I could argue with myself for a long time. Success doesn't happen without both of them, right? You need to have a great product team, but you need to have a great company in order to have a great product team, and definitely to scale in any way, that's for sure. As I've matured, oh my God, I'm so thankful for the other functions across the company that do their best every day to ensure that my teams can flourish as well.

 

Jeremy

What advice would you give for somebody who's wanting to get into product development?

 

James

I'll tell you the dysfunctions that I see sometimes, so you can avoid them. First and foremost, it scares me a lot when I see a person or team building something until it's perfect, or doing endless research, or making their user journey maps look beautiful, before they've actually shown the product or the prototype to a customer. If they've never talked to a customer in the first place, that's a real problem.

The advice is to start talking to your target customer. Identify who that might be; use the lean business canvas if you want to start there. But you have to be okay with showing stuff early and often, and being okay with being totally wrong all the time. The more times you can be wrong, the quicker you get to something that's great in that early product development. That's particularly true when there's no product-market fit yet. You try to work out what life cycle stage you're in.

If you're working as a product person on a more established product that's at scale, the same holds true. How do you get to production quickly with your prototype? How do you test things live in front of the real end users as quickly as you can and get those learnings? That learn, build, measure loop, if you can make it happen quickly, you will get to a position in the marketplace that provides the impact you're looking to have, be that monetary or whatever other outcomes you're looking for. Get started would be my bit of advice, and also don't be afraid of failing too much, because if you can get started and fail quickly, it's not that big a deal. It becomes a real big deal if you've spent weeks, months, or years building something, have no idea whether it's going to work, and then launch it, because then you've got a lot of pressure on your end and you're probably out of money at that point from a runway viewpoint.

Yeah, it's not always the easiest job, but be okay with looking into the future, trying something out, being wrong, and then working out why you're wrong. This is the other thing. People potentially see product launches as black and white, but they're never black or white. There's always somebody who liked an element of something, and then you look at that group and you say, "Okay, those people like this thing. Is that really the real product here? Should we double down on that? Are there more people like that person who liked that feature on whatever you've launched?" Then work out whether you can build into that. You might've unlocked a new use case within ... Your first assumption was totally wrong, but what are the learnings, where are the metrics, how can you find the learnings out of whatever you've launched in order to start that walk towards something that somebody really cares about?

 

Jeremy

Any cool projects you're working on right now or anything that you could tell us about?

 

James

That's part of my job, working out what all the teams are working on, because they innovate quicker than I can even catch up to sometimes. We just saw something in development; we've got a lot of things in development right now. We're thinking increasingly about how we get closer to the technologist's workflow entirely. But one that we launched, or hinted towards, at Pluralsight Live in August is Project Voyager, which people can check out. It's looking at different ways to represent the potential learning maps that people see in the learning journey, so that instead of it being, here's your video course, and then, as a separate experience, going over there and taking a Skill IQ or using a path, it's putting it all into one personalized learning journey.

Right now we have three different experiences out there: one for GCP, one for Azure, and one for AWS. It's all cloud-based right now. But the idea is you can take a learning component, and that might be a video course, it might be written content, and in the future it might be an interactive course from our interactive hands-on learning project. Then you do a learning check. We see, did you really understand the last bit of content? Then we're able to, in realtime, work out how to optimize your time: whether it's better for you to go back and take something a bit more foundational, or to just skip an entire unit because you're clearly up to speed on it, and to throttle your learning experience to your level, your outcomes, and what you're looking for. That's Project Voyager. It's really small right now, but we think it's going to grow into something pretty impactful very soon.
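
That "throttling" decision, choosing in real time whether to move a learner forward, skip a unit, or send them back to something foundational, could be imagined as a simple policy over the learning-check score. The thresholds and labels below are illustrative assumptions, not Project Voyager's actual logic.

```python
# Illustrative sketch of adaptive routing after a learning check.
# Thresholds and return values are assumptions for this example only.
def next_step(check_score):
    """Route the learner based on how well they did on the learning check (0.0-1.0)."""
    if check_score >= 0.9:
        return "skip_unit"             # clearly up to speed: jump ahead
    if check_score >= 0.6:
        return "next_component"        # on track: continue the planned path
    return "foundational_review"       # struggling: go back to fundamentals

print(next_step(0.95))   # skip_unit
```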

 

Jeremy

Thank you very much for talking with me today.

 

James

Yeah, no worries. Been a lot of fun.

 

Jeremy

Anything I didn't ask or anything that you'd like to talk about?

 

James

Well, it's all there. I could talk for hours and hours. When you get it right, you just have a lot of fun getting up and coming to work every day. People ask, what keeps you up at night? The last time I was up, it was thinking about how we connect all of these different pieces and the different experiences that we're building. We talked about the polyglot architecture. I get nervous that we're not putting it together quickly enough. How do we get going and go faster? But to me, when I'm nervous about that, it means I'm in a really cool flow state. We're building the future as quick as we can. Every day has its ups and downs, but right now I feel like we're having a lot of ups. It's perfect.

 

Jeremy

Having a solid mission behind everything is crucial for any company. When you're not just trying to move more widgets, it makes a big difference when there's meaning, when there's an actual solid mission there that means something and helps people's lives.

 

James

Absolutely. Yeah, totally.

 

Jeremy

Thank you for listening to All Hands on Tech. If you like it, please rate us. You can see episode transcripts and more info at pluralsight.com/podcast.