In this course, you will learn how to make .NET applications even faster by using a variety of techniques that expand upon the "Making .NET Applications Faster" course. You will explore the garbage collector's inner workings and how to use them to your advantage. You will learn about modern CPUs and how to optimize for them using vector instructions and cache optimization techniques. Finally, you will learn about relevant JIT optimizations and .NET Native, a preview .NET optimization technology.
Sasha is the CTO of Sela Group and a Microsoft C# MVP. He specializes in performance optimization, production debugging, distributed/cloud systems, and mobile development. Sasha is also a frequent conference speaker and published book author.
Garbage Collection Internals

Hello! This is Sasha Goldshtein from Pluralsight. Welcome to the Making .NET Applications Even Faster course. In this course, you'll learn how to further improve the performance of your .NET applications on both the client and the server side. The course builds upon my previous course, Making .NET Applications Faster, but you don't have to watch the previous course to learn from this one.

The course has five modules. The first two modules deal with the CLR garbage collector, which is a very complex piece of software. The GC can make your app crawl or run blazingly fast, and it all depends on whether you understand what makes it tick. You'll learn how the GC works and which coding practices make good or bad use of it. You'll also see several demos that show how to measure and improve garbage collector performance.

The third module covers vectorization, which is a form of parallelism. Instead of using multiple CPU cores, we'll use the hidden resources within each core to speed up certain kinds of algorithms, often by a factor of four or even eight. A new Microsoft library will help us vectorize programs without having to use C or C++ for performance-sensitive code.

The fourth module covers the CPU cache, which is critical for good algorithm performance. We'll talk about the various kinds of problems you can run into with the cache and how to avoid them. We'll also see that parallelizing code across multiple processors can often create subtle problems with cache collisions.

Finally, in the fifth module, we'll talk about JIT optimizations and a brand-new preview technology called .NET Native. We'll see how JIT optimizations can be useful but limited in scope, and how .NET Native brings C++-like performance to .NET applications.
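To give a flavor of the vectorization module: the Microsoft library alluded to above exposes a `Vector<T>` type (in the `System.Numerics` namespace) whose operations map to SIMD instructions on supporting hardware. Below is a minimal, illustrative sketch of element-wise array addition in that style; the class and method names are my own, not from the course.

```csharp
using System;
using System.Numerics;

static class VectorSum
{
    // Adds two equal-length float arrays element-wise, processing
    // Vector<float>.Count elements per loop iteration instead of one.
    public static float[] Add(float[] a, float[] b)
    {
        var result = new float[a.Length];
        int width = Vector<float>.Count; // e.g. 8 floats on 256-bit AVX hardware
        int i = 0;

        // Vectorized main loop: one SIMD add per iteration.
        for (; i <= a.Length - width; i += width)
        {
            var va = new Vector<float>(a, i);
            var vb = new Vector<float>(b, i);
            (va + vb).CopyTo(result, i);
        }

        // Scalar tail for the leftover elements.
        for (; i < a.Length; i++)
            result[i] = a[i] + b[i];

        return result;
    }
}
```

The speedup factor of four or eight mentioned above corresponds directly to `Vector<float>.Count`: with 128-bit SSE registers the loop handles four floats per iteration, with 256-bit AVX registers, eight.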