Fast Storage Spaces performance makes Server 2012 R2 impressive
Start with a few disks that you thin provision as a storage space, and add drives when you need them. You get enterprise features like parallel rebuilds and periodic data integrity scanning without paying for expensive storage enclosures. Plus, you can set up and manage storage spaces through a friendly wizard, or through PowerShell and System Center.
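As a rough sketch of the PowerShell route (the pool, space, and size names here are illustrative, not from Microsoft's deployment), creating a thin-provisioned space on Server 2012 R2 looks something like this:

```powershell
# Gather the physical disks that aren't already claimed by a pool
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from them ("Pool01" is an example name)
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve out a thin-provisioned, mirrored virtual disk; with thin
# provisioning it can be larger than today's physical capacity
New-VirtualDisk -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "Space01" `
    -ResiliencySettingName Mirror `
    -ProvisioningType Thin `
    -Size 10TB
```

When the pool starts running out of real capacity, you add another physical disk with `Add-PhysicalDisk` rather than reprovisioning anything.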
Getting the most out of Storage Spaces
You expect Microsoft to use its own technology, but the performance the Windows Release team gets out of Storage Spaces is impressive. They have to deal with 720 PB of data per week, and if the thousands of developers, testers and partners who work on Windows don't have access to it, Windows 8.1 and Server 2012 R2 aren't going to ship, so it counts as a mission-critical system for the company. They have 20 file servers with 10 GbE connections, and 20 60-bay JBODs filled with 3 TB 7200 RPM hard drives.
They're planning to add another 20 60-bay JBODs over the next year to cope with testing and releasing the two new operating systems – and they're still saving money over the alternatives, with a cost per terabyte of $450 rather than $1,350. That's on top of doubling storage throughput, even though they've gone from 120 file servers down to 20. Add in deduplication (a lot of those builds are pretty similar) and they have five times the effective capacity.
At TechEd in June 2013, Microsoft announced the new Storage Spaces features in Windows Server 2012 R2, including tiered storage using mainstream SSDs. The operating system automatically moves 'hot' data to the SSDs (and if you want, you can also specify particular files to always be on the fast storage). There's also a write-back cache that evens out the short-term 'spikes' in random writes to give you smoother performance.
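A hedged sketch of what those features look like in PowerShell on Server 2012 R2 (the pool, tier, and file names are examples, not Microsoft's configuration):

```powershell
# Define an SSD tier and an HDD tier in an existing pool ("Pool01" is an example)
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered, mirrored space with a 1 GB write-back cache
# to absorb short-term write spikes
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB

# Pin a file so it always stays on the fast tier (path is an example)
Set-FileStorageTier -FilePath "D:\VMs\hot.vhdx" -DesiredStorageTier $ssd
```

Note that tiered spaces have to be fixed-provisioned rather than thin, and pinned files are moved when the nightly tier-optimization job next runs; `Get-FileStorageTier` shows whether a pinned file has been placed yet.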
As Windows Server and Cloud program manager Jeff Woolsey pointed out to us at TechEd, “When you think about storage capacity and performance, everyone tends to get fixated on capacity – and capacity generally isn't the problem. IO is the problem and it has been for a long time.”
Storage tiering gives you impressive performance: going from 7,400 IOPS on spinning disks to 124,000 IOPS with SSDs is a more than sixteen-fold improvement.
But it's also simple. The thin provisioning interface has improved, so you know exactly when the deliberately conservative free-space algorithm is going to make your 70-percent-full array read-only until you add another drive to free up space.
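If you'd rather watch that headroom from PowerShell, comparing provisioned size against actual allocation is straightforward (the pool and space names below are illustrative):

```powershell
# Compare the provisioned size of a thin space with what's
# actually been allocated from the pool so far
Get-VirtualDisk -FriendlyName "Space01" |
    Select-Object FriendlyName, Size, AllocatedSize, FootprintOnPool

# Watch overall pool consumption too, so you can add a drive
# before the conservative threshold bites
Get-StoragePool -FriendlyName "Pool01" |
    Select-Object FriendlyName, Size, AllocatedSize
```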
“What we want to do is take the complex and make it really easy,” Woolsey claimed. “I'm going to slap in some SSDs and hard drives and enable tiering, enable some policy, set up two-way resiliency, three-way mirroring – and then I let the storage figure it out.”
Does Windows Server 2012 R2 mean you can cancel your next order with EMC?
What really caught the imagination of the admins we spoke to at TechEd was the cost savings. Coming out of the keynote we fell into conversation with folks who were on the way to cancel an EMC order they'd just written up. It was for a pilot program rather than a production system, and they were hugely enthusiastic about the budget savings they could get.
They agreed with Woolsey when he said, “In the past, I would do things on a SAN; now I can set up JBOD and SSD at a fraction of the cost. The cost per IO has been incredibly high. A terabyte of storage costs $3,000 – because that's what it costs me to put it in my SAN.”
Could Storage Spaces routinely displace EMC or NetApp storage in your system? It depends what you're using it for, but reliability and resiliency aren't issues, Woolsey claims. “People think there must be some trade-off, but everything is redundant. The whole point of the architecture is that it's a scaled out file server. If you deploy a mirror, if a disk goes out there could be six or eight drives there so there's no loss of service. And that? That's what a SAN is.”
That's not exactly true, though; you do get some extras with a SAN. Use a SAN that supports Offloaded Data Transfer (ODX) with Windows Server 2012, and when you migrate a VM from one place to another the storage it's using never leaves the array, making for exceptional transfer speeds.
There are also things specialized hardware does for specific use cases, like enterprise content management: APIs that handle metadata, retention policies, and e-discovery and audit options rely on your SAN running a hardware-optimized OS specialized for storage.
But just as we've taken specialized, high-price hardware for security appliances and firewalls and routers and turned them into software that runs on commodity servers, you can get many of those same features in applications. SharePoint and Exchange have retention policies and e-discovery and audit tools. SharePoint has metadata tools built-in, and more are available from third parties.
Windows Server 2012 Dynamic Access Control (DAC) uses metadata, security roles, keywords and other policy rules you create to classify content – and protects it with encryption, by limiting access, or by applying retention and e-discovery policies. That's not particularly easy to set up, and you'll want to look at the third-party content classification tools you can use alongside DAC. As with so many other things, advanced storage tools are moving from expensive hardware to software solutions on commodity hardware.
Storage Spaces in Server 2012 R2 could be the tipping point for re-evaluating how you choose your storage hardware, because now you really have to know why you're paying extra for specialized storage kit.