Course info
Rating: (126)
Level: Beginner
Updated: May 15, 2015
Duration: 1h 37m
Description

Study upon study confirms that web performance has a direct correlation to revenue, operating costs, and search engine ranking. With this in mind, we all want our applications to be faster, but how do we know what bottlenecks to focus on? This course will cover how to leverage various browser APIs to capture your application's live performance data, understand what the metrics mean, and focus on the ones you should really care about. Along the way you'll learn how to monitor real users, understand when to use synthetic testing tools, and automate performance tracking.
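
The course itself walks through the specific browser APIs, so purely as a taste of what capturing live performance data looks like, here is a minimal sketch (not taken from the course) that reads Navigation Timing entries in a modern browser:

    // Minimal sketch: reading Navigation Timing data in the browser.
    // These property names come from the standard Performance API, not from the course.
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
    if (nav) {
      console.table({
        timeToFirstByte: nav.responseStart - nav.requestStart,
        domContentLoaded: nav.domContentLoadedEventEnd - nav.startTime,
        pageLoad: nav.loadEventEnd - nav.startTime,
      });
    }

Note that the course dates from 2015, so it may demonstrate the original performance.timing interface rather than the newer entry-based API shown here.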

About the author

Nik Molnar is a Microsoft MVP and co-founder of Glimpse, an open source diagnostics and debugging tool.

More from the author
Progressive Web App Fundamentals (Intermediate, 2h 56m, May 8, 2017)
Section Introduction Transcripts

Why Does Performance Matter?
Hello, my name is Nik Molnar and I'd like to thank you for tuning in to this Pluralsight course, Tracking Real World Web Performance. Before we begin, let me quickly tell you what this course is about and what you should expect.

Thinking About Performance
This is Module 2 of Tracking Real World Web Performance, called Thinking About Performance. In the first module, we learned why web performance matters to our businesses and users. We also briefly heeded Donald Knuth's warning about premature optimization. In this module, we'll look at a decision-making framework that can help guide our performance optimization campaigns, and I'll introduce you to the sample application that we'll use throughout this course.

Understanding Individual Metrics
Hi, my name is Nik Molnar. Welcome to Module 3 of Tracking Real World Web Performance, titled Understanding Individual Metrics. In the last module, we learned all about Colonel John Boyd and his OODA Loop, a strategic decision-making framework leveraged by the U.S. military. In this module, we'll dive into the observation phase of the OODA Loop. In general, observation is all about measuring, and to measure, we use several different types of metrics. So, for the remainder of this course, we're going to learn about many different web performance metrics, what each one means, and when they are and aren't useful.
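
The transcript doesn't list which metrics the module covers, but as one concrete example of an application-defined metric, the browser's User Timing API lets you mark and measure your own milestones. The mark names and the stand-in work below are invented for illustration:

    // Sketch: defining a custom metric with the User Timing API.
    // 'search:start' / 'search:end' and runSearch() are invented for this example.
    const runSearch = () => new Promise<void>((resolve) => setTimeout(resolve, 250));

    async function timedSearch(): Promise<void> {
      performance.mark('search:start');
      await runSearch();
      performance.mark('search:end');
      performance.measure('search', 'search:start', 'search:end');
      const [measure] = performance.getEntriesByName('search', 'measure');
      console.log(`search took ${measure.duration.toFixed(1)} ms`);
    }

    timedSearch();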

Handling Collections of Metrics
Hello, I'm Nik Molnar and this is Handling Collections of Metrics, the fourth module in the Tracking Real World Web Performance course. Up to this point, we have covered why performance matters for web applications and how the OODA Loop can be used for making strategic decisions. The last module focused on observing web performance with various metrics. And in this module, we'll quickly look at the best ways to orient ourselves around a large collection of metrics data.
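
The transcript doesn't specify which statistics the module recommends, but to give a flavor of what orienting around a collection of samples typically involves, here is a generic sketch that summarizes timing data with a median and a 95th percentile:

    // Generic sketch: summarizing a collection of timing samples (milliseconds).
    // The choice of median and 95th percentile is illustrative, not course guidance.
    function percentile(samples: number[], p: number): number {
      const sorted = [...samples].sort((a, b) => a - b);
      const index = Math.ceil((p / 100) * sorted.length) - 1;   // nearest-rank method
      return sorted[Math.max(0, Math.min(index, sorted.length - 1))];
    }

    const loadTimes = [820, 940, 1010, 1180, 1250, 1400, 2900, 3100];
    console.log('median:', percentile(loadTimes, 50));   // 1180
    console.log('p95:', percentile(loadTimes, 95));      // 3100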

Gathering Metrics
Hi. Thanks so much for watching Tracking Real World Web Performance and for tuning in to Module 5, which I've named Gathering Metrics. The last two modules have prepared you to understand the types of metrics available when tracking web performance and how to analyze the data you collect with a keen statistical eye. In this module, we'll learn about several tools for gathering performance data and, perhaps more importantly, we'll cover the two main techniques used to test application performance and understand when to use each of them.
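
One of those two techniques, real user monitoring, boils down to reading timing data in visitors' browsers and posting it back to a collection endpoint. Here is a rough sketch of that idea; the '/rum-collect' URL is invented, and any real setup needs a server to receive the data:

    // Sketch of the RUM idea: post timing data to a collection endpoint after page load.
    // '/rum-collect' is an invented URL; it is not part of the course.
    window.addEventListener('load', () => {
      // loadEventEnd isn't populated until the load handlers finish, so defer one tick.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
        if (!nav) return;
        navigator.sendBeacon('/rum-collect', JSON.stringify({
          page: location.pathname,
          ttfb: nav.responseStart - nav.requestStart,
          load: nav.loadEventEnd - nav.startTime,
        }));
      }, 0);
    });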

Automating Metrics Gathering
Hello, I'm Nik Molnar and you're watching Automating Metrics Gathering, the sixth module in the Tracking Real World Web Performance course. In the previous module, we covered two techniques for gathering metrics. First up was real user monitoring, also known as RUM, which leverages various APIs built into modern browsers to record the experiences of real users and send them back to our servers for further analysis. Second, we covered synthetic testing, which leverages instrumented browsers in clean lab environments to record deep performance diagnostic information. We also looked at a few synthetic testing tools, the main one being WebPageTest. In this module, we'll extend our usage of these tools so that they can be automated with Node.js, npm, and Grunt scripts and added to our build or continuous integration processes.
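
To give a sense of what that automation can look like, here is a sketch (not the course's actual script) that kicks off a WebPageTest run from a Node.js script using the webpagetest package on npm. The server URL, option name, and environment variable are assumptions; verify them against the package's documentation:

    // Sketch: triggering a WebPageTest run from a Node.js script.
    // Assumes `npm install webpagetest` and a WPT_API_KEY environment variable.
    // The package ships without TypeScript typings, so require() is used here.
    const WebPageTest = require('webpagetest');

    const wpt = new WebPageTest('https://www.webpagetest.org', process.env.WPT_API_KEY);

    // pollResults asks the client to poll every N seconds until the test finishes.
    wpt.runTest('https://example.com', { pollResults: 5 }, (err: Error | null, result: any) => {
      if (err) {
        console.error('WebPageTest run failed:', err);
        process.exitCode = 1;   // fail the build step
        return;
      }
      // Inspect the result and compare it against your own thresholds here.
      console.log(JSON.stringify(result.data, null, 2));
    });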

What Should We Be Shooting For?
Hello, I'm Nik Molnar and this is the final module of Tracking Real World Web Performance, which I've titled What Should We Be Shooting For? In the last module, we learned about the performance budget process, as well as tooling that we can use to automatically enforce performance budgets. But we skirted around the question: what should our budget be? So in this module, we'll focus on answering that by looking at a few foundational usability studies and at a way you can find out how your site stacks up against the competition.
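
Whatever numbers those studies and competitive comparisons lead you to, enforcing them ultimately reduces to a comparison like the one below. This is a generic sketch; the metric names and budget values are placeholders, not recommendations from the course:

    // Generic sketch of a performance budget check; the numbers are placeholders only.
    const budget: Record<string, number> = { timeToFirstByte: 500, pageLoad: 3000 };    // ms
    const measured: Record<string, number> = { timeToFirstByte: 430, pageLoad: 3400 };  // e.g. from a test run

    const overages = Object.keys(budget)
      .filter((metric) => measured[metric] > budget[metric])
      .map((metric) => `${metric}: ${measured[metric]}ms (budget ${budget[metric]}ms)`);

    if (overages.length > 0) {
      console.error('Over budget:\n' + overages.join('\n'));
      process.exitCode = 1;   // fail the CI build
    }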