I’ve said it before, and I’ll say it again – it’s awesome working for an awesome company that supports the greater EMC, VMware and Cisco communities. The inaugural class of EMC Elect are finding the custom shirts that Clearpath had made for them at EMC World. The shirts are just one of our ways of saying thanks to the EMC Elect for their contributions to the EMC community. If you are one of the EMC Elect, make sure you stop by the Bloggers area at EMC World to pick up your custom shirt!
Storage
EMC Data Domain DD OS 5.3 Released
The Clearpath team is attending EMC World in Las Vegas this week. EMC is announcing new products and solutions, and we’ll be covering those announcements in a series of posts on the Clearpath Blog. If you’re not already following Clearpath, feel free to subscribe to our RSS feed or fill out the form for email updates on our site. The first update of the week is from Ted Evans, our Baltimore area pre-sales technical consultant. Ted writes about the upgrade of EMC’s Data Domain operating system (DD OS) to version 5.3. The release includes a host of enhancements that improve backup performance with Data Domain Boost and increase the flexibility of your Data Domain backup appliances by consolidating backup and archive functions on the same box.
Check out the feature list for DD OS 5.3 in Ted’s post (https://blog.clearpathsg.com/blog/bid/289811/The-Latest-Software-Update-for-Data-Domain) and stay tuned to Clearpath’s blog for updates throughout the week!
Is Spanning VMDKs Using Windows Dynamic Disks a Good Idea?
I’ve had several folks ask me recently about how to support very large NTFS volumes on vSphere virtualized Windows servers. The current limit for a VMDK in vSphere 5.1 is 2TB minus 512B. FWIW, a Hyper-V virtual disk in Windows Server 2012 can be up to 64TB. Those asking the question want to support NTFS volumes greater than 2TB for a variety of purposes – Exchange databases, SQL databases, and file shares. Windows (depending on the version and edition) can theoretically support NTFS volumes up to 256TB (depending on cluster size and assuming GPT), with files up to 256TB in size (see https://en.wikipedia.org/wiki/NTFS for more). Their proposed workaround is to present multiple 2TB VMDKs to a Windows VM, use Windows Logical Disk Manager (LDM) to convert the VMDKs from the default basic disks to dynamic disks, and then concatenate or span the disks into one large NTFS volume. Talk about a monster VM…. The question to me, then, became this: Is using spanned dynamic disks on multiple VMDKs a good idea? Here are some of my thoughts on the question.
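To put those limits in perspective, here’s a quick back-of-the-envelope calculation (plain Python, illustrative numbers only): NTFS tops out at roughly 2^32 clusters per volume, so the maximum volume size scales with the cluster size chosen at format time, while each vSphere 5.1 VMDK stays just under 2TB.

```python
# Back-of-the-envelope math for the limits discussed above; figures are
# illustrative -- check current VMware and Microsoft documentation.

TB = 2**40  # bytes per terabyte (binary)

# vSphere 5.1 VMDK limit: 2TB minus 512 bytes
vmdk_limit = 2 * TB - 512

# NTFS allows roughly 2**32 - 1 clusters per volume, so the maximum volume
# size depends on the cluster (allocation unit) size chosen at format time.
max_clusters = 2**32 - 1
for cluster_kb in (4, 64):
    max_volume = max_clusters * cluster_kb * 1024
    print(f"NTFS max volume @ {cluster_kb}KB clusters: {max_volume / TB:.0f} TB")

# Spanning five ~2TB VMDKs yields one Windows volume of roughly 10TB.
print(f"Five spanned VMDKs: {5 * vmdk_limit / TB:.2f} TB")
```

So NTFS itself is not the bottleneck; the VMDK ceiling is what pushes people toward spanning in the guest.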
First, there are no right or wrong answers. How you choose to support big data/large disk requirements will be a mix of preference, manageability, performance, recoverability, and fault domain considerations. These considerations span a few different levels – storage array, VMware, guest OS, guest application, and backup systems. A large spanned Windows volume can offer simplified management – you might not have to worry as much about running out of space, or about junior engineers having to think about where to place data in the guest OS. Still, I tend to avoid spanned Windows dynamic disks (or LVM) within VMs when possible. Here are some of my considerations for a range of systems (Exchange, SQL, file servers, etc.):
Performance
- Some applications, such as Microsoft SQL Server, can benefit from having more, smaller disks with multiple files in a database filegroup. Having different files on different disks, on different vSCSI controllers, can increase SQL Server’s ability to do asynchronous parallel IO. Microsoft’s recommendation for SQL Server (https://technet.microsoft.com/library/Cc966534) is to have between 0.25 and 1 data files per filegroup per core, with each file on a different drive/LUN. So an 8 vCPU SQL server would have between 2 and 8 .mdf/.ndf files on an equal number of drives (see the sketch after this list). This lends itself to more, smaller VMDKs that are not striped or spanned by Windows. It also requires a bit of design work within the database, optimizing your table and index structures to span multiple files in a filegroup.
- Smaller, purpose-built files/volumes/LUNs can be placed on the right storage tier with the best caching mechanism (e.g. SQL log volumes placed on RAID 1/0 with more write cache available).
- A single volume may have a limited queue depth. You’ll probably increase queuing capability as you scale out the number of VMDKs, and Windows will be able to drive more IO as additional disk channels are opened up.
- A greater number of virtual disks spread over different VMFS datastores may increase the number of paths used to service the workload. This may allow for increased storage bandwidth, more in-path cache, and more storage processor efficiency.
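Here’s the sizing sketch mentioned above (plain Python; the file names, controller count, and round-robin layout are hypothetical, not a VMware or Microsoft tool). It turns the 0.25-to-1 files-per-core guidance into a data file count, then spreads the resulting VMDKs across virtual SCSI controllers so the guest can push parallel IO down separate channels.

```python
# Rough sizing helper for the "0.25 to 1 data files per filegroup per core"
# guidance referenced above. Names and layout are hypothetical examples.
import math

def sql_data_file_range(vcpus: int) -> tuple[int, int]:
    """Return the (min, max) recommended number of data files per filegroup."""
    return max(1, math.ceil(vcpus * 0.25)), vcpus

def spread_across_vscsi(n_disks: int, controllers: int = 4) -> dict[int, list[str]]:
    """Round-robin the data VMDKs across virtual SCSI controllers so Windows
    can issue parallel IO down separate disk channels."""
    layout: dict[int, list[str]] = {c: [] for c in range(controllers)}
    for i in range(n_disks):
        layout[i % controllers].append(f"data_{i:02d}.vmdk")
    return layout

lo, hi = sql_data_file_range(8)
print(f"8 vCPU SQL VM: {lo} to {hi} data files, each on its own VMDK")
print(spread_across_vscsi(hi))
```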
Manageability
- By using Dynamic Disk striping, spanning or software RAID within the guest, you are introducing an extra layer of complexity that you will need to keep in mind while performing operations on the VM/VMDK. A storage operation on an array, LUN, VMFS datastore, or VMDK within your guest-striped volume could take the whole volume down.
- Having smaller, purpose-built VMDKs allows you to move specific parts of your workload to the physical storage tier that best suits them. Putting everything into one monolithic volume doesn’t allow this level of granularity. For example, I might create a smaller Exchange mailbox database and put executives’ mailboxes in it. I would then place that mailbox database in a VMDK on a VMFS on a LUN on a high tier or replicated disk (a great use case for VASA Profile Driven Storage, BTW). The interns’ mailbox database would be placed on the lowest tier of non-replicated storage. This configuration would also lend itself to more targeted and efficient backup schemes.
- This Microsoft TechNet article on Exchange 2013 storage architecture (https://technet.microsoft.com/en-us/library/ee832792.aspx) suggests that using GPT basic disks is a best practice, although dynamic disks are supported. Conversely, you could deduce that using spanned dynamic disks is not best practice. The TechNet article also recommends keeping your Exchange mailbox databases (MDB) under 200GB, so there’s no need for a VMDK over 2TB if you’re following best practices.
Fault Domains [Read more…] about Is Spanning VMDKs Using Windows Dynamic Disks a Good Idea?
Configuring VMware VASA for EMC VNX
vSphere Storage APIs for Storage Awareness (VASA) is one of several VMware vSphere Storage APIs. VASA, new in vSphere 5.0, provides vCenter with a way of interrogating storage array LUNs and associated datastores to gain visibility into the underlying hardware and configuration of the storage layer. Storage capabilities, such as RAID level, thin or thick LUN provisioning, replication state, caching mechanisms, and auto-tiering, are presented through VASA to vCenter (a unidirectional read operation by vCenter against the array). With VASA, vCenter can identify which datastores possess certain capabilities. By associating a VM – or specific virtual disks within a VM – with storage profiles, we can begin to take advantage of VMware’s Profile Driven Storage capabilities. With VASA helping to guide VM placement, IT can deliver a higher quality of service to match SLAs.
A few examples of how using VASA can help IT guarantee SLAs are:
- A user-defined storage profile for ‘High Speed Sequential Write’ could be associated with a VMDK used for database logging. The same profile would be assigned to VMFS datastores backed by RAID 10 with ample write cache.
- VMs running critical applications could be associated with a storage profile for ‘Synchronous Replication’. Datastores protected by a SAN-based replication package (such as EMC SRDF or EMC RecoverPoint) would be assigned this profile to guarantee replication of the VMs on the datastore. VMware SRM would then be used to provide crash and application consistency, along with automated failover/failback capabilities.
- Test/dev VMs could be associated with a storage profile for lower-tiered disk without a flash-based caching mechanism (e.g. EMC FAST Cache) to keep low priority machines from consuming expensive disk and cache.
- A cloud provider configures multiple tiers of storage in a gold/silver/bronze fashion and assigns appropriate storage profiles to the datastores. Customers choose which tier they want (based on cost vs. performance) and have VMs automatically provisioned on the correct storage tier. This can be done in vCenter or in vCloud Director!
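To make the profile-matching idea concrete, here’s a minimal, purely conceptual sketch (plain Python, not the vSphere or VASA API; the capability names and datastores are made up): datastores advertise capabilities of the kind VASA surfaces, a storage profile lists required capabilities, and placement is limited to compatible datastores.

```python
# Conceptual model of profile-driven storage -- not the vSphere/VASA API.
# The capability names and datastores below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capabilities: set[str]  # what the array reports via VASA

@dataclass
class StorageProfile:
    name: str
    required: set[str]      # what the VM or VMDK needs

    def compatible(self, ds: Datastore) -> bool:
        # A datastore is compatible when it advertises every required capability.
        return self.required <= ds.capabilities

datastores = [
    Datastore("GOLD-01",   {"RAID10", "WriteCache", "SyncReplication", "FASTCache"}),
    Datastore("SILVER-01", {"RAID5", "FASTCache"}),
    Datastore("BRONZE-01", {"RAID6", "NL-SAS"}),
]

profiles = [
    StorageProfile("High Speed Sequential Write", {"RAID10", "WriteCache"}),
    StorageProfile("Synchronous Replication",     {"SyncReplication"}),
]

for profile in profiles:
    matches = [ds.name for ds in datastores if profile.compatible(ds)]
    print(f"{profile.name}: {matches}")
```

In the real feature, vCenter does this matching for you and can flag VMs whose current datastore no longer satisfies their assigned profile.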
VASA-enabled profile driven storage can be combined with vSphere Storage DRS for automated capacity and performance (IOPS) load balancing across like datastores. Greater degrees of automation decrease risk while improving SLAs. Taken one step further, VMware’s forthcoming vVols technology will essentially make VASA bidirectional: a VM will tell the underlying storage what performance, features, and capabilities it requires, and the storage array will automatically create a matching VMDK on itself to meet the VM’s demands.
EMC VNX fully supports the current version of VASA in vSphere 5.1. To give you an idea of what data can be seen through VASA, here are the storage capabilities exposed [Read more…] about Configuring VMware VASA for EMC VNX
EMC Releases VNXe Update with SRM, Writeable Snapshots, Encryption and More!
EMC released VNXe Operating Environment MR4 (version 2.4.20932) on January 7th. The upgrade brings a few significant enhancements to the VNXe platform. Before I cover the new features, let’s have a quick look at the VNXe series for those not familiar with it.
The EMC VNXe Series, part of EMC’s VNX family, is an affordable unified storage platform designed for smaller businesses. The unified platform provides both file and block services – iSCSI on block, and CIFS and NFS on file – with a nice base feature set including File Dedupe, Block Compression, Thin Provisioning, and Snapshots, all managed through an easy-to-navigate, wizard-driven Unisphere management package. Additional software packs are available for the VNXe that enable remote replication, file-level retention, and application-aware replication. Host connectivity includes 1GbE and 10GbE.
The VNXe series includes the VNXe 3150 and VNXe 3300 models. The VNXe 3150 utilizes quad-core CPUs and offers up to 25 2.5″ drives or up to 12 3.5″ drives in its 2U rack-mount form factor. Both models support SAS, near-line SAS, and EFD drive technologies. The VNXe 3150 supports up to 100 drives in a dual Storage Processor (SP) configuration, for up to 192TB of raw capacity. On VNXe 3300 systems, you can have up to 150 drives with a maximum raw capacity of 240TB.
The VNXe is a good platform for smaller VMware environments, including remote/branch offices. VNXe is also great for Microsoft Exchange, Hyper-V, and SMB file shares – all easily carved up, configured, and presented to hosts using Unisphere wizards. The VNXe is also included in smaller configurations of the EMC VSPEX for VMware View 5.1 virtual desktops.
With the 2.4.20932 (MR4) release of the VNXe OE code, the VNXe gains several new features. Let’s look at these new features and what they mean for VNXe customers.
First is support for Self-Encrypting Drives (SEDs). A VNXe purchased with SEDs provides Data at Rest encryption of all data on the array. Encryption is transparent and automatic, with no noticeable performance impact. This is a great feature for environments where Data at Rest encryption is required, such as medical facilities with HIPAA/HITECH requirements, government environments, and financial institutions with PCI DSS requirements.
Another great feature added in MR4 is support for Windows 8 / SMB 3.0. With SMB 3.0 support, the VNXe gains continuous availability for CIFS (multi-network path access, synchronous writes, transparent server failover, etc.) and CIFS encryption for data in flight between the array and host. SMB 3.0 also provides support for larger IO sizes, offloaded copy, and parallel IO to improve performance and responsiveness for the user.
A VNXe feature long desired by VMware shops is support for VMware Site Recovery Manager (SRM). MR4 introduces SRM support for VMware NFS datastores; a Storage Replication Adapter (SRA) is forthcoming, pending VMware certification testing. SRM support adds options for building a highly available, disaster-resistant VMware environment, and the VNXe becomes a great target for the SRM DR site.
Finally, the VNXe now supports some new RAID configurations. The VNXe aggregates multiple RAID groups to create a storage pool; LUNs are carved out of storage pools and presented to hosts. Prior to MR4, the RAID group options available for each pool type were fairly restrictive, which made usable capacity a bit lower than expected and configurations somewhat cumbersome.
With MR4, VNXe now supports a few new RAID configurations:
- RAID 5 (10+1) for SAS and NL-SAS
- RAID 6 (10+2) for NL-SAS
These new RAID configurations fit well into a single 12-disk DAE (with room for a hot spare in the new 10+1 RAID 5 configuration). With larger RAID sets underlying VNXe storage pools, usable capacity increases while performance stays roughly the same. A larger RAID group does carry a greater risk of data loss, because a dual-drive failure within a single group becomes more likely, so check with a qualified storage architect before you use the new configuration options.
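As a rough illustration of the usable-capacity gain, assume hypothetical 2TB NL-SAS drives in a 12-slot DAE and compare the new 10+1 RAID 5 layout with a narrower, hypothetical 4+1 layout:

```python
# Rough usable-capacity comparison for a 12-slot DAE. The drive size and the
# narrower RAID group width are hypothetical examples for illustration only.
DRIVE_TB = 2   # hypothetical 2TB NL-SAS drives
SLOTS = 12

def raid5_usable(data_drives: int, groups: int) -> int:
    """Usable TB for `groups` RAID 5 groups of (data_drives + 1 parity)."""
    return groups * data_drives * DRIVE_TB

# New 10+1 layout: one group plus one hot spare fills the DAE exactly.
new_layout = raid5_usable(data_drives=10, groups=1)   # 11 drives + 1 spare
# Hypothetical narrower layout: two 4+1 groups plus two spares.
old_layout = raid5_usable(data_drives=4, groups=2)    # 10 drives + 2 spares

print(f"10+1 RAID 5:      {new_layout} TB usable")
print(f"2 x (4+1) RAID 5: {old_layout} TB usable")
print(f"Gain: {(new_layout - old_layout) / old_layout:.0%}")
```

The gain in this example (about 25%) is why wider groups are attractive, but it comes with the larger fault domain noted above.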
All in all, MR4 for the VNXe builds on a solid base to provide highly available, secure, and easily managed storage for a variety of workloads in smaller IT environments. If you have additional questions on the VNX family of storage, visit us here or give us a shout on Twitter @clearpathsg.