Here’s a tasty tidbit for you: in the just-announced vSphere 5.5, we can now create 64TB VMDK files. While much of vSphere 5.5 offers a 2x increase in configuration maximums over vSphere 5.1, the increase in VMDK size is a whopping 32x. (It may actually be a bit less: I’m not sure whether the -512B caveat from the old 2TB limit carries over to vSphere 5.5, and I didn’t think to test it in the lab during my beta testing. As it turns out, the max size is actually 62.9TB, since space is reserved in the containing VMFS for snapshots, metadata, etc.)
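If you want to poke at this from the API side, here’s a minimal pyVmomi sketch that bolts a giant thin-provisioned disk onto an existing VM. To be clear, this is my own illustration, not anything out of the 5.5 docs: `vm` is assumed to be a `vim.VirtualMachine` object you’ve already retrieved from a connected session, and the unit number is a placeholder.

```python
from pyVmomi import vim

def add_big_disk(vm, size_tb=62, unit_number=1):
    """Attach a new thin-provisioned VMDK of size_tb terabytes to vm."""
    # Find an existing SCSI controller to hang the new disk off of.
    controller = next(dev for dev in vm.config.hardware.device
                      if isinstance(dev, vim.vm.device.VirtualSCSIController))

    disk = vim.vm.device.VirtualDisk()
    disk.controllerKey = controller.key
    disk.unitNumber = unit_number            # placeholder; unit 0 is usually the boot disk
    disk.capacityInKB = size_tb * 1024 ** 3  # TB -> KB
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk.backing.diskMode = 'persistent'
    disk.backing.thinProvisioned = True

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    dev_spec.device = disk

    # ReconfigVM_Task returns a vim.Task you can wait on.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```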
Now, just because you can doesn’t mean you should. I wrote a post a couple of months ago about large volume support in vSphere, and the arguments I made there are still valid now. Consider the simple things: restoring a 64TB VMDK in the event of a disaster would be a heck of a long process, and a single disk that size presents a pretty large single point of failure.
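To put a rough number on that restore time, here’s some quick back-of-the-envelope Python. The 500MB/s sustained restore rate is my assumption, not a measurement, so plug in whatever your backup infrastructure actually delivers:

```python
# Rough restore-time math: how long does a full 64TB restore take?
size_bytes = 64 * 1024 ** 4           # 64TB
rate_bytes_per_sec = 500 * 1024 ** 2  # assumed 500MB/s sustained from backup storage
hours = size_bytes / rate_bytes_per_sec / 3600
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days)")  # ~37 hours (~1.6 days)
```

And that’s with the restore target as the only bottleneck; in a real disaster you’re likely sharing that pipe with every other restore in the queue.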
While the wisdom of configuring a 64TB VMDK is questionable for many workloads, I do think the ability to create such a behemoth shows the ever-increasing capabilities of the VMkernel and VMFS. Hopefully this means that heap size limitations (https://kb.vmware.com/kb/1004424) are a thing of the past. I haven’t done an in-place upgrade from 5.1 to 5.5, so I don’t know whether the VMFS3.MinHeapSizeMB option is automatically adjusted to support more/larger VMDKs; something else I’ll have to test when I get home from VMworld, I guess.
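In the meantime, here’s a hedged pyVmomi sketch for checking (and, if needed, raising) that advanced option on a host. Again, this is my own illustration rather than anything VMware ships; `host` is assumed to be a `vim.HostSystem` you’ve already looked up, and the 256MB value reflects the commonly cited 5.x maximum, so check the KB article before applying it.

```python
from pyVmomi import vim

def check_min_heap(host):
    """Print the current VMFS3.MinHeapSizeMB setting on a host."""
    opt_mgr = host.configManager.advancedOption
    for opt in opt_mgr.QueryOptions("VMFS3.MinHeapSizeMB"):
        print(f"{opt.key} = {opt.value}")

def raise_min_heap(host, value_mb=256):
    """Bump the heap setting; 256MB is an assumed ceiling, per the KB."""
    opt_mgr = host.configManager.advancedOption
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="VMFS3.MinHeapSizeMB", value=value_mb)])
```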
Final thought: VMDKs larger than 2TB will also let us stop using RDMs for large volume support. This helps pave the way for pure NFS environments (RDMs are a block-only option) and opens the door to vVol, vSAN, and vFlash support for all of our virtual (software-defined) storage.