Need to make your .NET applications more responsive? Run faster? On today's multicore hardware, async and parallel programming may be the answer. This course (part 2 of 2) discusses the safe and efficient design of asynchronous and parallel .NET applications. It builds upon the introduction provided in part 1 ("Introduction to Asynchronous and Parallel Programming in .NET 4"), offering more detail on the inner workings of the Task Parallel Library, the dangers of concurrent execution, and the higher-level abstractions available in the TPL to help you. The course closes by weaving these concepts together and presenting common patterns for building fast, correct parallel software. This course is for anyone working in .NET 4 or Silverlight 5.
Joe focuses on high-performance computing and .NET languages. He has specialized in Microsoft technologies since 1992 and is well-versed in Microsoft's High-Performance Computing initiative (HPC Server, Compute Cluster Server, MPI, MPI.NET, OpenMP, PFx), web technologies (ASP.NET and the Ajax Extensions for ASP.NET), the desktop (WinForms), LINQ, the .NET Framework, and its most popular languages (VC++, C#, F#, and VB).
Understanding the Dangers of Concurrency

Hi, welcome to the course Async and Parallel Programming: Application Design. This is the first lecture, Understanding the Dangers of Concurrency. Please note that this course is really part two of a sequence and builds upon the course Introduction to Async and Parallel Programming in .NET 4. If you watched part one, thank you very much and welcome back. My name is Joe Hummel and I'll be your presenter today. I've been working with parallel computing since the early '90s and earned a PhD in the area in 1998. I'm interested in all things parallel, including CPU and GPU parallelism, native and managed code, high-performance computing on clusters and up in the cloud, et cetera. And it's really an exciting time to be active in this field. Assuming you're comfortable with tasks in the Task Parallel Library, the agenda for this module is to talk about the dangers inherent in asynchronous and parallel programming. While there are lots of rocks to avoid, it's my contention that as application developers, the main danger you need to worry about is race conditions. So after a quick introduction to the many dangers, the goal here is to discuss race conditions: how to identify them, and how best to avoid them. There are many techniques, some based on locks, others based on lock-free designs. We'll present examples of both, mention other synchronization primitives, and then offer you yet another option at a higher level of abstraction: concurrency-aware data structures. And we can't do this and really understand it without lots and lots of live demos. Okay, here we go.
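To make the race-condition discussion concrete, here is a minimal sketch (not taken from the course's demo code) of a classic lost-update race on a shared counter, fixed two ways: with a lock, and lock-free with `Interlocked`. The syntax is current C#, but all of the APIs used (`Parallel.For`, `lock`, `Interlocked.Increment`) exist in .NET 4.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class RaceDemo
{
    // Unsafe: count++ is a read-modify-write sequence, so two threads can
    // read the same value and one update is silently lost.
    public static int UnsafeCount(int n)
    {
        int count = 0;
        Parallel.For(0, n, i => { count++; });
        return count;   // often less than n on multicore hardware
    }

    // Lock-based fix: the lock serializes each read-modify-write.
    public static int LockedCount(int n)
    {
        int count = 0;
        object mutex = new object();
        Parallel.For(0, n, i => { lock (mutex) count++; });
        return count;   // always n
    }

    // Lock-free fix: a single atomic hardware increment.
    public static int AtomicCount(int n)
    {
        int count = 0;
        Parallel.For(0, n, i => Interlocked.Increment(ref count));
        return count;   // always n
    }

    static void Main()
    {
        const int N = 1000000;
        Console.WriteLine("unsafe=" + UnsafeCount(N) +
                          " locked=" + LockedCount(N) +
                          " atomic=" + AtomicCount(N));
    }
}
```

Running this repeatedly, the unsafe version typically prints a different (too small) number each time, which is exactly what makes races so hard to debug: the program is wrong only sometimes.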
Execution Model and Types of Parallelism

Hi, welcome to the course Async and Parallel Programming: Application Design. This is the second lecture, Execution Model and Types of Parallelism. Ultimately, our goal is the design of safe and effective parallel applications. But to get there, we first had to talk about the dangers of concurrency; that was lecture one. Here in lecture two, we need to talk about how parallel programs execute in .NET, and the types of parallelism you are likely to identify and focus on in your application. Please note that this course is really part two of a sequence and builds upon the course Introduction to Async and Parallel Programming in .NET 4. My name is Joe Hummel and I'll be your presenter today. I've been working with parallel computing since the early '90s and have a PhD in the field. I'm interested in all things parallel, including CPU and GPU parallelism, native and managed code, high-performance computing on clusters and in the cloud — anything to do with parallelism. Now the agenda for this module is twofold. First, I'd like to discuss the task-based execution model in .NET 4, and exactly how tasks are executed by threads, which are run on CPU cores. We'll also talk about how and when you might consider customizing this execution model. Second, I want to talk about the most common types of parallelism: data, task, dataflow, and embarrassingly parallel. This will help you identify the possible parallelism in your problem domain. Finally, I'd like to discuss some higher-level abstractions in the Task Parallel Library that are perfect for taking advantage of data and task parallelism, in particular Parallel.For, Parallel.ForEach, and Parallel.Invoke. Here we go.
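As an illustration of the two most common types mentioned above (again, a sketch I've written for this summary, not the course's own demo), `Parallel.For` expresses data parallelism — the same operation partitioned across the elements — while `Parallel.Invoke` expresses task parallelism — different independent operations running concurrently. Both APIs date from .NET 4, though the tuple-return syntax here is newer C#.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

static class KindsOfParallelism
{
    public static (double sum, double max) Process(int n)
    {
        // Data parallelism: the same operation applied to every element,
        // with the iteration space partitioned across the CPU cores.
        var data = new double[n];
        Parallel.For(0, n, i => data[i] = Math.Sqrt(i));

        // Task parallelism: two different, independent computations run
        // concurrently. Each lambda writes only its own variable, so
        // there is no shared mutable state and no race.
        double sum = 0, max = 0;
        Parallel.Invoke(
            () => sum = data.Sum(),
            () => max = data.Max());

        return (sum, max);
    }

    static void Main()
    {
        var (sum, max) = Process(10000);
        Console.WriteLine("sum=" + sum + " max=" + max);
    }
}
```

Note the design rule at work: parallel branches that never write the same variable need no synchronization at all, which is why identifying independent work is the first step in parallel design.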
Designs and Patterns for Parallel Programming

Hi there. It's a beautiful day here in Chicago and a pleasure to be recording the third and final lecture, Designs and Patterns for Parallel Programming. This is the lecture where we pull everything together and discuss high-level solutions to the design of parallel applications. You're viewing the course Async and Parallel Programming: Application Design, which is really part 2 in our sequence. In case you missed it, the first course is entitled Introduction to Async and Parallel Programming in .NET 4. My name is Joe Hummel and I'll be your presenter today. I've been working with parallel computing since the early '90s and earned a PhD in the area in 1998. I'm interested in all things parallel, including CPU and GPU parallelism, native and managed code, and high-performance computing on clusters and in the cloud. The agenda for today's module is to present high-level designs for correct, high-performance software. We'll start by presenting a few design problems for you to think about, then we'll jump into the well-known parallel patterns, such as pipeline and dataflow, the concurrent data structures provided by the Task Parallel Library, the famous producer-consumer pattern, MapReduce, and task-local state. I also want to talk about Parallel LINQ, speculative execution, and then finish off with APM, the Asynchronous Programming Model.
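Two of the topics above — the producer-consumer pattern and the TPL's concurrent data structures — meet in `BlockingCollection<T>`, which .NET 4 provides exactly for this pattern. Here is a small sketch of my own (assumptions: one producer, one consumer, a hypothetical bound of 16 items) showing how the collection handles all the synchronization:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

static class ProducerConsumerDemo
{
    public static long Run(int items)
    {
        // A bounded buffer: Add blocks when the buffer is full, and the
        // consuming enumerable blocks when it is empty -- the classic
        // producer-consumer handshake, with no explicit locks in our code.
        using (var queue = new BlockingCollection<int>(boundedCapacity: 16))
        {
            long total = 0;

            var producer = Task.Run(() =>
            {
                for (int i = 1; i <= items; i++)
                    queue.Add(i);
                queue.CompleteAdding();  // tells the consumer no more items are coming
            });

            var consumer = Task.Run(() =>
            {
                // Loop ends when the queue is empty AND CompleteAdding was called.
                foreach (int item in queue.GetConsumingEnumerable())
                    total += item;       // only the consumer task touches total
            });

            Task.WaitAll(producer, consumer);
            return total;                // items * (items + 1) / 2
        }
    }

    static void Main() => Console.WriteLine("total=" + Run(10));
}
```

The design point: by funneling all shared data through one concurrency-aware structure, each task's remaining state is private, which is what makes the pattern both fast and easy to reason about.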