8 things you can accomplish in a few hours on Pluralsight

If you’re a professional in a technology role and are new to Pluralsight, it might feel a little bit like being a kid in a candy store—or like trying to drink from a firehose.

With more than 7,000 courses on topics such as cybersecurity, Docker, Python, React, Angular, cloud architecture and more, it can be hard to know where to start. So we decided to go straight to the source to parse out some of the most fundamental, useful things you can learn, build and do on Pluralsight in a short period of time.

We asked several of our expert authors, “What could someone accomplish with just a few hours of devoted skill development time on Pluralsight?” Here’s what they had to say:

You can create a search application from scratch

Search is one of the most misunderstood functionalities in IT. Your users absolutely need search, yet developers tend to think about it only when it's missing or poorly implemented.

Did you know that you can create a search application using a search engine like Apache Solr with a library like SolrNet in a few hours? Here are the steps that you need to follow:

Step 1: Get Apache Solr, the search engine

Search engines were historically too complex or too expensive to build, but Apache Solr, a free open-source tool that's easy to set up, changed that.

Go here to download Solr, then watch the Solr Configuration module of the Getting Started with Apache Solr training to get started.

Step 2: Create the schema for your data

Solr is a NoSQL store, which means that you can add data to your search engine and the schema will be updated with new fields (like columns) to accommodate your data—this is what’s called a managed schema. While this approach works, it is recommended to create a schema to model your data properly.

Watch the Content: Schemas, Documents and Indexing module to learn how to create your schema.
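For illustration, a hypothetical schema fragment for a small product catalog might define fields like these (the field names are made up for this example; string, text_general and pfloat are standard Solr field types):

```xml
<!-- Illustrative schema.xml fields for a product catalog -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="name" type="text_general" indexed="true" stored="true"/>
<field name="category" type="string" indexed="true" stored="true"/>
<field name="price" type="pfloat" indexed="true" stored="true"/>
```

Defining fields up front like this, instead of relying on the managed schema, keeps field types predictable as your data grows.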

Step 3: Index your data

After firing up your search engine and creating a schema, you need data. You can index data using the POST tool that’s included in Solr or you can index data using .NET. Watch the Making Your Content Searchable with Indexing module to learn how to add data to your Solr index programmatically.

Step 4: Download and explore SolrNet

SolrNet is a library that abstracts the functionality of Solr so that you can create an application in .NET. Download SolrNet from this GitHub repository, open the C# project and explore it to understand how it works.

Step 5: Download the SolrnetSampleApps repo and compile it

SolrNet has a sample application that can be modified in just a few minutes to allow you to search the data in your index. Download the SolrnetSampleApps GitHub repo and compile it.

Step 6: Modify the class that represents your data

Open the SolrNetSampleApp, look for Product.cs and add the properties that you specified in your index, with each property's type matching the field type in schema.xml. Change the SolrURL to point to your local Solr instance. Set the facet names to one or two fields that you would like to use to drill down while searching.

If all this sounds complex, don't worry: it isn't. Watch the Getting Started with SolrNet – Your .NET Search Library module to learn how to modify the sample application to run with your data. There's also a 5-minute demo that gives you a very high-level overview.

Step 7: Run your application and search

Press F5 in Visual Studio, run and enjoy searching in your application! Now you can upload it to a web server in the cloud that includes a Solr instance and share it with the world.

- - -

Xavier Morera

You can deploy an application to Kubernetes

Containers and Kubernetes have revolutionized the way we build and deploy applications.

By letting us package applications and dependencies in a standardized, portable format—and run them on a standard runtime—containers have all but eliminated the dreaded “Works on My Machine” issue. Kubernetes takes things to the next level with features like intelligent scheduling, self-healing, automatic scaling and zero-downtime rolling updates.

Put the two together, and you have a powerful set of tools ideal for cloud-native applications that can deliver services in dynamic business environments.

Here’s how to take a simple application all the way from source code on GitHub to a fully running application on Kubernetes that is able to self-heal, scale and update on a rolling basis.

Step 1: Understand the importance and role of Docker and Kubernetes

If the terms “cloud-native,” “self-healing” or “rolling updates” sound like a foreign language to you, watch Docker and Kubernetes: The Big Picture to grasp the fundamentals and learn how containers and Kubernetes deliver these features in concrete ways.

Step 2: Grab the source code

Next, this GitHub repo provides a simple web application based on HTML, CSS and some JavaScript. It also provides supporting Docker and Kubernetes files to help in later steps. 

Clone the repo to use for the remaining steps: $ git clone

This will give you the app code you need to get started.

Step 3: Get Docker and Kubernetes

Here are a couple of simple options if you don’t already have Docker and Kubernetes:

For Docker, either download and install Docker Desktop for Windows or Mac, or—if you can’t install software on your laptop—use Play with Docker.

For Kubernetes, you can visit this link at MSB to get your own secure, private cluster to play around in. (There are some other hosted Kubernetes cloud services that are relatively simple to set up, but they might cost money.)

Step 4: Build a Docker image

The root of the git repo includes a file called Dockerfile. This is a special file containing instructions to help Docker build the application into a Docker image. Watch the Containerizing an App module of Docker Deep Dive to learn how to build the application into a Docker image.

Feel free to push the new image to Docker Hub if you have a Docker Hub account. If you don’t, you can either sign up and get a free account, or just use the publicly available image at nigelpoulton/ps-web:1.0.

Finally, watch the Working with Containers module of Getting Started with Docker to learn how images relate to containers and how Docker Hub works.

Step 5: Deploy to Kubernetes

Kubernetes requires containers to be packaged as Pods. Watch the Working with Pods module in Getting Started with Kubernetes to learn how to package and run the image you just created as a container inside a Pod on Kubernetes.
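For reference, a minimal Pod manifest for the image mentioned in Step 4 might look like the following sketch (the name, labels and container port here are illustrative; adjust the port to whatever the app actually listens on):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ps-web
  labels:
    app: ps-web
spec:
  containers:
  - name: web
    image: nigelpoulton/ps-web:1.0   # or your own image from Step 4
    ports:
    - containerPort: 8080            # the port the app listens on
```

Apply it with kubectl apply -f pod.yml and confirm it is running with kubectl get pods.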

Step 6: Expose the application to the internet

Kubernetes uses a Service object configured with type=LoadBalancer to expose Pods and applications to the internet. Watch the Kubernetes Services module of Getting Started with Kubernetes to learn how to expose the application to the internet via a Service.

LoadBalancer Services integrate with your cloud provider's load balancers, meaning this will only work on Kubernetes clusters deployed to a public cloud or on MSB.
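A sketch of such a Service manifest (the names are illustrative, and the selector and targetPort must match the labels and port of your own Pod):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ps-web-svc
spec:
  type: LoadBalancer
  selector:
    app: ps-web          # must match the Pod's labels
  ports:
  - port: 80             # port exposed by the cloud load balancer
    targetPort: 8080     # port the container listens on
```

After kubectl apply, kubectl get svc shows the external IP once the cloud load balancer has been provisioned.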

Step 7: Connect to the app

Copy the Service’s public IP and use it in a browser to connect to the app.

Step 8: Take it to the next level

Kubernetes Pods are great. However, on their own they don't provide self-healing, scaling or rolling updates. For that you need a Kubernetes Deployment.

Watch the Kubernetes Deployments module of Getting Started with Kubernetes to take your application to the next level by deploying it inside a Kubernetes Deployment and performing some scaling and update operations.
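For reference, a hypothetical Deployment manifest wrapping the same container might look like this (the name, labels, replica count and port are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ps-web-deploy
spec:
  replicas: 3                  # Kubernetes keeps three Pods running
  selector:
    matchLabels:
      app: ps-web
  template:
    metadata:
      labels:
        app: ps-web
    spec:
      containers:
      - name: web
        image: nigelpoulton/ps-web:1.0
        ports:
        - containerPort: 8080
```

Apply it with kubectl apply -f deploy.yml, scale with kubectl scale deployment ps-web-deploy --replicas=5, and trigger a rolling update by changing the image tag in the manifest and re-applying it.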

Fill in the gaps

Packaging and deploying applications to Kubernetes is simple once you know how to do it.

We’ve just pieced together several modules from different courses to help you master aspects of Docker and Kubernetes, but if you feel there are gaps in your knowledge, you should go back and watch the following courses in order: Docker and Kubernetes: The Big Picture, Docker Deep Dive (the first seven modules) and Getting Started with Kubernetes.

- - -

Nigel Poulton

You can master manifold learning algorithms

Mark Twain is believed to have quipped that “history doesn't repeat itself, but it does rhyme,” and plenty of data sets in today’s big-data enabled world bear this out. Consider the following examples—some intuitively obvious, others less so:

  • Prisoners up for parole are far more likely to be granted parole if their cases are discussed by judges early in the day, or right after lunch or a snack, a famous peer-reviewed study found.

  • Stocks in different parts of the world have long displayed a tendency to rise at month-end, year-end and ahead of long weekends. The “January effect” in American stocks was first documented several decades ago.

  • Levels of energy, motivation and mood tend to improve as days get longer and fall as the number of hours of daylight decreases.

Evidence like this of complex, periodic patterns in data across domains and disciplines is extensive, and the task of homing in on those patterns can seem pretty overwhelming.

This is where Pluralsight's rich library of machine learning (ML) content can help. The specific family of ML techniques suited to this problem is called manifold learning, and it applies to data where the manifold hypothesis holds true. The manifold hypothesis posits that data that seems very complicated is sometimes not that complicated at all, provided you look at it the right way.

Here is a step-by-step guide for how you can start with virtually no programming experience and quickly implement a machine learning technique to identify patterns of the sort described above:

Step 1: Getting started

Start with Pluralsight’s beginner-level course Python for Data Analysts, and watch the first module Getting Started with Python for Data Analysis. This module is about 45 minutes in length, and will help you get Python installed on your computer.

Step 2: scikit-learn

Next, switch to another beginner-level Pluralsight course, Building Your First scikit-learn Solution, and watch the first module, Exploring scikit-learn for Machine Learning. This module is also about 45 minutes in length, and will help you understand, conceptually, how machine learning differs from other approaches to data modeling. (You will also have scikit-learn installed on your computer by this point.)

Step 3: Prepping your data

You are now ready to take the intensity up a notch with the Pluralsight course Preparing Numeric Data for Machine Learning. Focus on the first module, also titled Preparing Numeric Data for Machine Learning. The course is advanced-level, but have no fear: the first module sits closer to an intermediate level, and by this point you are well-equipped to understand the material.

Step 4: Implementing manifold learning algorithms

You are now ready to move to the advanced-level Pluralsight course Reducing Dimensions in Data with scikit-learn. You can jump straight to the final module, titled Dimensionality Reduction in Non-linear Data. This will help you implement manifold learning algorithms, which are specifically designed to unroll data in complex shapes.
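To make this concrete, here's a minimal sketch (not from the course) that uses scikit-learn's Isomap to unroll a synthetic S-shaped surface; the dataset and parameter choices are illustrative:

```python
# Manifold learning with scikit-learn: flatten an S-shaped 3-D point
# cloud into 2-D with Isomap, which preserves geodesic (along-the-
# surface) distances while unrolling the manifold.
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

X, color = make_s_curve(n_samples=500, random_state=0)  # X is (500, 3)

embedding = Isomap(n_neighbors=10, n_components=2)
X_2d = embedding.fit_transform(X)

print(X.shape, X_2d.shape)  # (500, 3) (500, 2)
```

Plotting X_2d colored by the original position along the curve shows the "S" unrolled into a flat strip.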

Step 5: Finding patterns

You can now apply the techniques you learned in Step 4 to any time-series data and search for recurring patterns. The NYC taxi data set lends itself well to such an exercise; you could create x-variables corresponding to time-of-day, day-of-week and day-of-year and whether or not it’s a holiday, and attempt to predict attributes related to demand. Another dataset which lends itself to this kind of modeling is the Bike Sharing Dataset. You can also find plenty of other datasets on Yahoo Finance, Kaggle and other online data repositories.
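As a hypothetical sketch of that feature engineering, here's how you might derive such x-variables from a timestamp column with pandas (the column name and values are made up):

```python
# Derive time-of-day, day-of-week and day-of-year features from a
# timestamp column, as you might for the taxi or bike-sharing data.
import pandas as pd

df = pd.DataFrame({
    "pickup_time": pd.to_datetime(
        ["2020-01-01 08:15", "2020-01-01 17:40", "2020-01-04 23:05"]),
    "trips": [120, 310, 95],
})

df["hour"] = df["pickup_time"].dt.hour               # time of day
df["day_of_week"] = df["pickup_time"].dt.dayofweek   # 0 = Monday
df["day_of_year"] = df["pickup_time"].dt.dayofyear

print(df[["hour", "day_of_week", "day_of_year"]])
```

These derived columns become the x-variables a model can use to pick up daily, weekly and seasonal rhythms in demand.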

At this point, you are ready to set off on your own exciting journey into the world of machine learning and data modeling. By now, the prospect of implementing all of these algorithms will not seem daunting to you in the least.

- - -

Janani Ravi

You can add elasticity to your cloud

Software companies are dealing with a tough problem: As COVID-19 continues to move a large portion of the population indoors, the unexpected surge in internet traffic has caused websites around the world to crash and slow down.

For websites in the cloud, one important practice can make all the difference between succeeding during this unexpected traffic and failing completely: elasticity. Having your infrastructure scale up when traffic is high and scale down when traffic is low is key to both serving customers’ needs and making sure your company isn’t wasting money. This practice isn’t just important during our current instability. One of the key benefits of elasticity is that you aren’t paying for resources that aren’t being used, and that’s ideal for times of both high demand and low demand. 

There’s no better time to learn this important concept and put it into practice at your company. Here are some courses, modules and clips for learning elasticity in the cloud:

Elasticity for Amazon Web Services

If you use AWS (or just want to learn how elasticity and auto-scaling work for the world’s largest cloud provider), the first step is to really understand what elasticity means, and this clip will help you do that quickly. Next, you’ll want to either learn or refresh your basic understanding of creating and working with EC2 resources and load balancers. The Getting into the Virtual Machine with EC2 and VPC module in the AWS Developer: Getting Started course is a great way to do this.

Finally, the AWS scalability path is an excellent collection of courses that cover elasticity and auto-scaling in depth.

Elasticity for Microsoft Azure

Azure is the second-largest cloud provider in the world, with a huge number of companies relying on its worldwide infrastructure to power their applications. First, take a look at this clip, which explains what scaling really means in the context of Azure. Next, take the Designing for Azure Autoscale module in the Designing for High Availability on Microsoft Azure course to understand how to set up auto-scaling resources. Finally, another great hands-on course for setting up auto-scaling for developers on Azure is Microsoft Azure Developer: Developing for Autoscaling.

Using any of these resources is going to get you one step closer to being a master of elasticity and auto-scaling—and most importantly, ensuring your application is prepared for any type of traffic.

- - -

Ryan Lewis

You can do more with your organization's data

There are a range of specific data analysis techniques that will immediately change the way you work with your organization’s data. From t-tests and Bayesian A/B testing to detecting outliers and anomalies and performing Association Rule Mining, here are 10 practical, self-contained and logically connected ways to do more with data.

Step 1

Start with the course Interpreting Data using Descriptive Statistics with Python, and go through the module Working with Descriptive Statistics Using Pandas. Actually go through and write the code and execute the commands—this will give you a good practical understanding of measures of central tendency, such as the mean, as well as of measures of dispersion, such as variance and standard deviation.
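To illustrate those measures, here's a small made-up example in pandas (not taken from the course):

```python
# Descriptive statistics on a made-up series of daily sales figures:
# central tendency (mean, median) and dispersion (variance, std dev).
import pandas as pd

sales = pd.Series([10, 12, 23, 23, 16, 23, 21, 16])

print(sales.mean())    # 18.0
print(sales.median())  # 18.5
print(sales.var())     # sample variance
print(sales.std())     # sample standard deviation
```

Running .describe() on the same Series gives all of these (plus quartiles) in one call.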

Step 2

Now move to the course Interpreting Data using Statistical Models with Python and study the module Performing Hypothesis Testing in Python.

A hypothesis is a sophisticated term for a hunch, an idea or notion that you might have which needs to be tested and proved or disproved. Using the Kaggle Bike Sharing dataset (and another dataset with blood pressure readings from before and after an intervention), you will learn the correct, rigorous way of evaluating hypotheses.
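As a sketch of the same idea, here's a paired t-test in scipy on invented before/after readings (the numbers are made up for illustration):

```python
# Paired t-test: did an intervention change the mean blood-pressure
# reading? H0 (the null hypothesis): it made no difference.
from scipy import stats

before = [142, 138, 150, 145, 139, 148, 152, 141]
after  = [135, 136, 143, 140, 138, 141, 147, 137]

t_stat, p_value = stats.ttest_rel(before, after)
print(t_stat, p_value)

# A small p-value (conventionally < 0.05) lets us reject H0 and
# conclude the intervention had a real effect.
```

With real data you would also check the test's assumptions (roughly normal differences) before trusting the p-value.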

Step 3

Now that you have a solid grasp of the required statistical concepts, it's time to move on to machine learning. You can do this with the course Building your First scikit-learn Solution, where the module Building a Simple Machine Learning Model with scikit-learn will help you understand how to code up a basic linear regression model.
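For a feel of what that looks like, here's a minimal linear regression sketch with scikit-learn on made-up data where y is roughly 3x + 2:

```python
# Fit a basic linear regression and use it to predict a new point.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4], [5]])
y = np.array([5.1, 7.9, 11.2, 13.8, 17.1])

model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)  # close to 3 and 2
print(model.predict([[6]]))              # roughly 20
```

The fitted coefficient and intercept recover the underlying 3x + 2 relationship despite the noise in y.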

Step 4

Ordinary Least Squares Regression is by far the most popular form of regression, but with a minimal amount of additional effort, you can exploit some powerful variants such as Lasso, Ridge and Elastic Net regression. 

In the course Building Regression Models with scikit-learn, the Building Regularized Regression Models module takes a comprehensive look at how such regression models can play an important role in eliminating extraneous or unimportant x-variables. This is important because some x-variables might seem intuitively appealing, but can actually do more harm than good if included in a regression model.
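As an illustrative sketch (not from the course), here's Lasso shrinking the coefficient of a pure-noise x-variable toward zero while keeping the genuinely useful one:

```python
# Lasso regression penalizes the absolute size of coefficients, which
# drives the weights of irrelevant x-variables toward exactly zero.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
useful = rng.normal(size=n)           # genuinely drives y
noise_feature = rng.normal(size=n)    # unrelated to y
X = np.column_stack([useful, noise_feature])
y = 4 * useful + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)  # first coefficient near 4, second near 0
```

Ridge and Elastic Net apply the same idea with different penalty shapes, shrinking rather than zeroing coefficients.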

Step 5

Another important aspect in data modeling, particularly while building linear models, is your choice of axes. The choice of axes determines your perspective and frame of reference, so poorly chosen axes will yield a poor perspective. You can learn how to avoid such pitfalls in the course Reducing Dimensions in Data with scikit-learn in the module Dimensionality Reduction in Linear Data.
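One classic linear technique in this space is PCA; here's an illustrative sketch (not from the course) in which two nearly redundant columns collapse onto a single principal axis:

```python
# PCA picks new axes aligned with the directions of greatest variance.
# When two columns are highly correlated, one component captures
# nearly all the information.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
height = rng.normal(170, 10, size=300)
weight = 0.9 * height + rng.normal(0, 2, size=300)  # nearly redundant
X = np.column_stack([height, weight])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # first component dominates
```

Dropping the second component here would lose very little information, which is exactly the kind of axis choice the module explores.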

Step 6

The process of Exploratory Data Analysis (EDA) actually comes before you start building models, but the true value of EDA becomes apparent only after you've learned model-building. In the course Building Features from Numeric Data, you can learn various techniques for pre-processing your data and visually identifying outliers and anomalies. Such data points can gum up your model and cause serious harm if not eliminated or pre-processed before model-building begins. The Building Features Using Scaling and Transformations module is the one to study for more on this topic.
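As a small sketch of this kind of pre-processing (not from the course), here's z-score scaling used to flag a point that sits far from the rest of the data:

```python
# Standardize a column to zero mean and unit variance, then flag
# points whose z-score magnitude exceeds a cutoff as outliers.
import numpy as np
from sklearn.preprocessing import StandardScaler

values = np.array([[12.0], [14.0], [13.5], [12.8], [13.1], [95.0]])

scaled = StandardScaler().fit_transform(values)
outliers = np.abs(scaled) > 2  # a common (if crude) z-score cutoff
print(scaled.round(2))
print(outliers.ravel())        # only the 95.0 reading is flagged
```

In practice you would pair a rule like this with visual checks (box plots, histograms) before deciding whether to drop or transform a point.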

Step 7

The term "data mining" might seem dated, but there's a lot of substance in many data mining techniques. As you will learn in the course Data Mining and the Analytics Workflow in the module Using Data Mining to Find Patterns, one such technique that has withstood the test of time is Association Rule Mining. In steps 1 through 6, our emphasis was on continuous data, but working with categorical (discrete-valued) data is extremely important too.
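The heart of Association Rule Mining can be sketched in a few lines of plain Python; the shopping baskets below are made up for illustration:

```python
# Support and confidence, the two core measures of Association Rule
# Mining, computed over a tiny set of hypothetical shopping baskets.
baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "eggs"},
    {"bread", "milk", "eggs"},
]

def support(itemset):
    """Fraction of baskets containing every item in itemset."""
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """P(consequent | antecedent): how often the rule holds."""
    return support(antecedent | consequent) / support(antecedent)

# Rule: customers who buy bread also buy milk.
print(support({"bread", "milk"}))       # 0.6
print(confidence({"bread"}, {"milk"}))  # about 0.75
```

Real implementations (such as the Apriori algorithm) add clever pruning so that these measures can be computed over millions of baskets.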

Step 8

You have built a great regression model, and obtained a great R-square. Great! Now, your boss comes along and asks just how confident you are in that R-square. You don’t need to mumble apologetically that regression models don’t usually come with confidence intervals around the R-square. Instead, you can rely on the techniques of case resampling and residual resampling described in the module Implementing Bootstrap Methods for Regression Models of the course Implementing Bootstrap Methods in R to silence the naysayers.
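The course works in R; purely for illustration, here's the same case-resampling idea sketched in Python on synthetic data:

```python
# Case-resampling bootstrap: refit the regression on samples of the
# rows drawn with replacement, collect the R-squared of each refit,
# and read off a percentile confidence interval.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 1))
y = 2 * X.ravel() + rng.normal(scale=1.0, size=100)

r2_samples = []
for _ in range(500):
    idx = rng.integers(0, len(X), size=len(X))  # rows with replacement
    model = LinearRegression().fit(X[idx], y[idx])
    r2_samples.append(model.score(X[idx], y[idx]))

lo, hi = np.percentile(r2_samples, [2.5, 97.5])
print(f"95% bootstrap interval for R-squared: [{lo:.2f}, {hi:.2f}]")
```

Residual resampling works the same way, except you resample the model's residuals instead of whole rows.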

Step 9

As a data professional, it's incredibly fulfilling to have your analysis accepted and reflected in changes made to real-world systems. But after the high come the questions: We made that change, but what next? How do we know whether the change worked or not?

A great tool in this situation is A/B testing, which you can learn about in the course Building Statistical Summaries with R where the module Implementing Bayesian A/B Testing sets you up with both the detailed theoretical background as well as the practical tools needed to implement this incredibly powerful technique.
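For a feel of the mechanics, here's a compact Bayesian A/B testing sketch with made-up conversion counts: a Beta posterior for each variant's conversion rate, then Monte Carlo samples to estimate the probability that B beats A:

```python
# Bayesian A/B test: Beta(1, 1) prior + binomial likelihood gives a
# Beta posterior for each variant; sampling from both posteriors
# estimates P(B's true conversion rate exceeds A's).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: conversions out of visitors.
conv_a, n_a = 40, 1000
conv_b, n_b = 60, 1000

post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

prob_b_better = (post_b > post_a).mean()
print(f"P(B beats A) = {prob_b_better:.3f}")
```

Unlike a frequentist test, the output is a direct probability statement about the variants, which is often easier to act on.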

Step 10

Choices around which model to use are very important, yet are often made in an ad-hoc fashion. Every data professional has their own favorite tools, and most folks have a tendency to overuse those favorites. As the saying goes: When you’ve got a hammer, everything starts to look like a nail. To avoid this all-too-common behavioral bias, you should strongly consider using a technique called model stacking, which is described in the course Employing Ensemble Methods with scikit-learn in the module Implementing Ensemble Learning Using Model Stacking. Here you will see how you can employ multiple independent models, and then combine their results in a rigorous, data-driven manner.
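For a taste of what that looks like, here's a minimal model-stacking sketch with scikit-learn on synthetic data (the estimator choices are illustrative):

```python
# Model stacking: two different base regressors, combined by a final
# model trained on their out-of-fold predictions.
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge

X, y = make_regression(n_samples=300, n_features=5, noise=10,
                       random_state=0)

stack = StackingRegressor(
    estimators=[
        ("linear", LinearRegression()),
        ("forest", RandomForestRegressor(n_estimators=50, random_state=0)),
    ],
    final_estimator=Ridge(),
)
stack.fit(X, y)
print(stack.score(X, y))  # R-squared of the combined model
```

The final estimator learns, from data, how much to trust each base model rather than leaving that choice to your favorite-tool bias.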

- - -

Janani Ravi

You can become data literate

Whether you are a novice, intermediate or advanced individual in the world of data, you can empower yourself with critical data literacy skills. So, what courses can you take to help yourself become more data literate?

If you are new to data literacy, Pluralsight's Executive Briefing courses are a good place to start.

Data literacy is a vast topic, encompassing both soft and technical skills. These Executive Briefing courses are designed to orient you and guide you in finding better, more actionable insights from your data and translating those insights into meaningful decision making. And even if you’re not an executive, they can help you analyze data better or learn to speak the “language” of data if you’re in a business, marketing or other non-tech role.

Going from data literate to data fluent

Once you’ve taken the Executive Briefings, you can jump into more advanced learning courses within the Pluralsight platform. Before you do, I recommend using the Data Analytics Literacy skill assessment to measure your baseline. From there, you can discover where to go next.

I also highly recommend other courses around data science and communicating with data, such as Communicating Data Insights, as being able to communicate the insight you find—or building your data storytelling muscle—is essential to success with data. Once you’ve mastered data analytics basics, you can expand out to the closely related Data Science Literacy assessment.

Individuals all over the world now have the opportunity to skill up quickly. With the future in mind, take the time to invest in yourself and your data skills.

- - - 

Jordan Morrow

You can create a virtualized hacking lab

Do you have some free time?

More importantly, do you want to have some fun while experimenting with different vulnerabilities (and testing your security knowledge) along the way?

If so, it’s time to learn how to set up your own virtualized hacking lab with the Laying the Foundation for Penetration Testing for CompTIA PenTest+ course.

As we all know, new cyberattack techniques are constantly evolving, which requires security professionals to ensure they’re always on the cutting edge. With a virtualized hacking lab, you can create a simulated environment—that contains, for example, a few Windows 2016 and Windows 2019 servers, a Windows 10 system, two intentionally vulnerable web servers and an attacker workstation using Kali Linux—to make sure you’re staying sharp.

By learning and utilizing this lab environment, you’ll be able to test the latest vulnerabilities, tools and techniques in the industry without taking them for risky “test drives” within your own live production environments.

Why you should test your security in virtual environments

One of the best things about sharpening your hacking skills in this type of environment is the ability to roll back systems. Rolling back (based on what VMware and VirtualBox call snapshots and Hyper-V calls checkpoints) allows you to undo all the damage caused by your research and experimentation. You’ll never have to feel your stomach drop when you make a mistake again; instead, after setting an initial snapshot, you simply roll the system back to its previous snapshot, and within seconds, you’re back as if nothing had ever happened.

Another great feature is the ability to “clone” a system. Need another Windows Server 2019 box fired up to host an app? Easy. Just select to clone an existing virtual system, and in a couple of minutes, you have an exact duplicate ready for you to attack.

The next steps on your pentest journey

If you have a little more time, the Pentest+ path will also teach you concepts that help you think like an attacker, which is a key skill every security professional—even developers—should acquire to secure their networks. You can become your very own Criminal Minds-style profiler, able to anticipate the "criminal" attacker's next move before they make it.

But if I had just five hours to sharpen my security skills, I’d hoodie it up and have some fun creating my own virtualized hacking lab.

- - -

Dale Meredith

You can learn how to lead change

If there is one thing we learned in 2020, it is that change is inevitable. While the discipline of change management has been around for decades, it is often overshadowed by its more popular sibling, project management. 

Project management prepares a change for an organization by ensuring the budget, timeline and scope are all monitored and managed. Change management prepares an organization for a change by overcoming people’s resistance and helping them to adopt the new ways of working. 

Either one without the other is a failure. You need both project and change management to lead a successful change—here's what you can do to get started:

Beginner level

  1. Learn the basics of the discipline in Change Management: Getting Started. Here you will learn what change management is, why it is important and how you can speed up adoption by implementing it in your organization.
  2. Next, explore Understanding Psychology of Change to broaden your fundamental knowledge of the discipline.
  3. Finally, Building a Successful Change Strategy will help you build a culture of change.


Intermediate level

  1. When you are ready to dive into IT-specific examples and download a few templates to help you assess your organization’s change readiness, watch Managing IT: Organizational Change Management. This course will provide you with eleven templates which you can start using in your organization right now.

  2. Finally, Leading Change: The Head, Heart & Hands Approach will help you identify your natural leadership style and help you improve your less dominant styles.

All of these courses, plus change management articles and websites, can be found in the Change Management and CCMP Resources channel, which is continually updated. Be sure to follow this channel and the authors of these courses.

- - -

Kevin Miller