Running server operations using clusters of either physical or virtual computers is all about improving performance over and above what you could expect from a single, high-powered server. But "improving performance" can mean different things in different contexts. This course, Linux High Availability Cluster Management (LPIC-3 304 Part 2/2), will bring many aspects of performance improvement to light. You'll be introduced to the principles of Linux-based HA and cluster management and the key tools currently in use in real-world environments - including Linux Virtual Server (LVS), HAProxy, Pacemaker, DRBD, OCFS2, and GFS2. You'll learn how to intelligently spread workloads among diverse geographic and demand environments (load balancing). You'll also discover how to provide backup servers that can be quickly brought into service in the event a working node fails (failover). Finally, you'll learn about optimizing the way your data tier is deployed and allowing for fault tolerance through loosely coupled architectures. By the end of this course, you will be able to improve and manage many aspects of the performance of your local or cloud Linux deployments, and they'll be more reliable for it.
David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.
Hi. I'm David Clinton and this course on Linux High Availability Cluster Computing is all about making sure the services you offer over the Internet or to clients on either private or public clouds are going to be there when they're needed.
If you don't want your users to come up empty when trying to access your application, then you'll need to add some smart replication and redundancy to the mix. As an added bonus, a well-designed High Availability cluster can also deliver significantly improved performance and reduced network latency.
So whether your primary goal is protecting application availability by intelligently spreading your workloads across diverse geographic and demand environments - known as load balancing -
Providing backup servers that can be quickly brought into service in the event a working node fails (failover),
Or optimizing the way your data tier is deployed,
Or allowing for fault tolerance through loosely coupled architectures,
There's an open source Linux-based software stack that's just right for you.
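To give you a taste of how simple the tooling can be, here's a minimal, illustrative HAProxy configuration sketch combining the two ideas above: traffic is balanced across two active servers, with a third held in reserve as a failover backup. The server names and IP addresses are hypothetical, and a production config would need additional global and defaults settings.

```
# Illustrative HAProxy snippet (hypothetical addresses)
frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin                       # load balancing: alternate between active servers
    server web1 192.168.1.11:80 check        # "check" enables health checks
    server web2 192.168.1.12:80 check
    server web3 192.168.1.13:80 check backup # failover: used only if web1 and web2 are down
```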
Why not join me for an introduction to how it all works? This is the second and final course in my series covering the LPIC-3 304 certification objectives, following my Linux Server Virtualization course.
To get the most out of this material, you should be pretty comfortable working with Linux file systems, networking, and package management from the command line.